Pig: Expense Analysis using GROUP Operator

In this post I will demonstrate how to use Pig’s GROUP operator to analyse credit card expenses, determine the top expenses for the year, and calculate their percentage of the total expense. This exercise was done on the Hortonworks Data Platform (HDP). Continue Reading
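As a taste of the approach, a minimal Pig Latin sketch along these lines groups transactions by category and ranks the totals; the expenses.csv file and its schema are hypothetical placeholders, not the actual dataset from the post:

    -- load credit card transactions (hypothetical file and schema)
    expenses = LOAD 'expenses.csv' USING PigStorage(',')
               AS (txn_date:chararray, merchant:chararray, category:chararray, amount:double);
    -- GROUP collects all transactions for each category into a bag
    by_category = GROUP expenses BY category;
    -- aggregate each bag into a category total
    totals = FOREACH by_category GENERATE group AS category, SUM(expenses.amount) AS total;
    -- rank the categories and keep the top expenses
    ordered = ORDER totals BY total DESC;
    top5 = LIMIT ordered 5;
    DUMP top5;

The percentage of the total spend can then be derived by taking a GROUP expenses ALL over the same relation to obtain the grand total and dividing each category total by it.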

HDP: Analysing Twitter Hashtags using Hive

Hive translates HiveQL queries into MapReduce jobs. The built-in capabilities of HiveQL abstract the implementation of mappers and reducers behind a simple yet powerful SQL-like query language. To demonstrate these built-in capabilities, I will be analysing hashtags from a Twitter feed on the Hortonworks Data Platform (HDP). Continue Reading
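For a flavour of how little HiveQL is needed, here is a minimal sketch that tokenises tweet text and counts hashtag frequencies; the tweets table and its text column are assumptions for illustration, not the schema used in the post:

    -- explode each tweet into words, keep only hashtags, count occurrences
    SELECT hashtag, COUNT(*) AS mentions
    FROM tweets
    LATERAL VIEW explode(split(text, '\\s+')) words AS hashtag
    WHERE hashtag RLIKE '^#'
    GROUP BY hashtag
    ORDER BY mentions DESC
    LIMIT 10;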

Power Query: Import Data from Web Pages

Power Query can discover and import data from websites. Data warehouses often rely on external data that is readily available on public websites, e.g. public holidays, school holidays, daylight savings, SIC codes, SWIFT codes, post codes etc. Power Query is perfectly suited to such situations: it can discover, fetch and transform data from an HTML table in a web page into a format that can easily be imported into a data warehouse using an SSIS package. It acts like an ETL tool for web page data sources.
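As an illustration, a minimal Power Query (M) sketch of this pattern looks something like the following; the URL and the table index are placeholders, not a real data source:

    let
        // fetch the page and parse the HTML tables it contains
        Source = Web.Page(Web.Contents("https://example.com/public-holidays")),
        // pick the first table found on the page (the index is page-specific)
        Holidays = Source{0}[Data]
    in
        Holidays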
Continue Reading

Azure: Delete HDInsight Cluster, yet Retain Data

In Hadoop, the storage cluster (HDFS) is also the processing cluster (MapReduce). Azure provides two different options for storing data:

Option 1: Use the HDInsight cluster to store data as well as to process MapReduce requests. For example, a Hive database hosted in an HDInsight cluster which also executes HiveQL MapReduce queries. In this instance data is stored in the cluster’s HDFS.

Option 2: Use the HDInsight cluster only to process MapReduce requests, while data is stored in Azure blob storage. For example, the Hive data is stored in Azure storage while the HDInsight cluster executes HiveQL MapReduce queries. Here the metadata of the Hive database is stored in the cluster whereas the actual data is stored in Azure storage. The HDInsight cluster is co-located in the same datacentre as the Azure storage and connected by a high-speed network.

There are several advantages to using Azure storage (Option 2). Continue Reading
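One common pattern behind Option 2 is a Hive external table whose LOCATION points at blob storage; a minimal sketch, with a hypothetical storage account, container and schema:

    -- metadata lives in the cluster's Hive metastore;
    -- the data itself stays in Azure blob storage
    CREATE EXTERNAL TABLE tweets (id BIGINT, text STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION 'wasb://mycontainer@myaccount.blob.core.windows.net/tweets/';

Because only the metastore entry lives in the cluster, deleting the cluster leaves the data intact, and a new cluster can re-attach to the same storage later.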

HDInsight: Analysing Twitter Hashtags using Hive

Hive translates HiveQL queries into MapReduce jobs. The built-in capabilities of HiveQL abstract the implementation of mappers and reducers behind a simple yet powerful SQL-like query language. To demonstrate these built-in capabilities, I will be analysing hashtags from a Twitter feed on the Azure HDInsight platform.

Continue Reading

SSIS: Creating Dynamic Data-driven SSAS Partitions

Partitions improve the query response of an SSAS cube by narrowing the MDX query to a pre-defined subset of data. Usually cubes are developed with a few basic partitions to begin with, and new partitions are added over time. Partitions are created using XMLA commands. The rest of this post explains the steps to create partitions dynamically using metadata and an SSIS package.
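The XMLA command at the heart of this approach is a Create against a measure group; a minimal sketch with hypothetical database, cube and measure group IDs, where the query definition would be generated from the metadata by the SSIS package:

    <Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <ParentObject>
        <DatabaseID>HospitalDW</DatabaseID>
        <CubeID>Admissions</CubeID>
        <MeasureGroupID>FactAdmissions</MeasureGroupID>
      </ParentObject>
      <ObjectDefinition>
        <Partition xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <ID>FactAdmissions_201401</ID>
          <Name>FactAdmissions_201401</Name>
          <Source xsi:type="QueryBinding">
            <DataSourceID>DW</DataSourceID>
            <!-- the WHERE clause comes from the metadata table -->
            <QueryDefinition>
              SELECT * FROM FactAdmissions WHERE MonthKey = 201401
            </QueryDefinition>
          </Source>
          <StorageMode>Molap</StorageMode>
        </Partition>
      </ObjectDefinition>
    </Create>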

Continue Reading

SSIS: Dynamic Processing of SSAS Partitions

SSAS partitions are primarily geared towards better query performance, narrowing the MDX query to a pre-defined subset of data. Each partition must be in a processed state for the cube to be available for browsing. However, in a delta ETL run it is more than likely that only some partitions are refreshed with data while the rest remain unchanged. For example, in a hospital admissions cube partitioned by month, data movements are frequent for admissions made during the current or previous month compared to earlier months. This means that if the partitions updated by the ETL run can be identified, the SSIS package can be tailored to dynamically process only those partitions that require a data refresh while leaving the remaining partitions untouched.
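For each partition flagged by the ETL run, the SSIS package can issue a targeted XMLA Process command; a minimal sketch, again with hypothetical object IDs:

    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>HospitalDW</DatabaseID>
        <CubeID>Admissions</CubeID>
        <MeasureGroupID>FactAdmissions</MeasureGroupID>
        <!-- only the partition touched by the delta load -->
        <PartitionID>FactAdmissions_201401</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>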

Continue Reading

The End Game is Not Just Canned Reports

Canned reports/dashboards, aka custom reports or out-of-the-box reports, are an inevitable part of any BI ecosystem. They are a good point of reference for conversations with business users, especially at the start of a project when requirements are often a bit hazy. In many instances, business users see report layouts as an effective and convenient medium of communication with BI project teams. This, however, should not be considered THE business requirement. Instead of dwelling on canned reports, the focus should be on building a well-designed DW/BI system that empowers users to perform analysis on a self-service basis. The quality and range of analysis that a self-service BI platform provides far outweigh the benefits of just rolling out a set of canned reports. It empowers business users to do their own analysis while the BI team works towards expanding and refining the analytical platform instead of operating as a “reporting team”. Continue Reading

SSIS Package to Process SSAS Cube

The SSAS processing engine’s execution plan triggers a sequence of processing tasks based on each object’s current processed state and its dependencies on other objects. This is what happens when an SSAS cube is processed from SQL Server Management Studio (SSMS). It is possible to replicate this execution plan closely enough in an SSIS package so that the cube is always processed and refreshed with the latest data.
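A simple way to approximate that execution plan from SSIS is an Analysis Services Execute DDL Task running a ProcessDefault batch, which lets the engine work out what each object needs based on its current state; a sketch with a hypothetical database ID:

    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Process>
        <Object>
          <DatabaseID>HospitalDW</DatabaseID>
        </Object>
        <!-- ProcessDefault processes only what the engine deems out of date -->
        <Type>ProcessDefault</Type>
      </Process>
    </Batch>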

Continue Reading