Polybase Query Service and Hadoop – Welcome SQL Server 2016

Introduction

One of the coolest features of SQL Server 2016 is Polybase. Already available for Parallel Data Warehouse, this functionality is now integrated into SQL Server 2016 and allows you to combine relational and non-relational data: for example, you can query data in Hadoop and join it with relational data, import external data into SQL Server, or export data from the server into Hadoop or Azure Blob Storage. This last case is especially interesting, since it is possible to transfer old transactions or historical data to a Hadoop file system and dramatically reduce storage costs.

Setup Polybase

I installed the following components:

After installing SQL Server, enable TCP/IP connectivity:

enable tcp ip

Verify that the Polybase services are running:

polybase services

Create an external data source

Open a connection to the AdventureworksDW2016CTP3 database.

Polybase connectivity configuration:

sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
GO
RECONFIGURE
GO

'hadoop connectivity' is the name of the configuration option, and @configvalue selects the corresponding supported Hadoop data source. In my case I selected 7, which corresponds to Hortonworks 2.1, 2.2, and 2.3 on Windows Server. I am using my own Hadoop 2.7.1 build, which is the Hadoop version shipped with Hortonworks HDP 2.3 and 2.4.
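If you want to double-check the setting afterwards, running sp_configure with only the option name reports the configured and running values (a quick, optional check):

-- Optional: show the current 'hadoop connectivity' values
-- (config_value is the configured value, run_value the one in effect)
EXEC sp_configure 'hadoop connectivity';
GO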

More info here:

Hortonworks Products

Polybase Connectivity Configuration

Create external data source script:

CREATE EXTERNAL DATA SOURCE HDP2 WITH
(
    TYPE = HADOOP,
    LOCATION = 'hdfs://localhost:9000'
)

HADOOP is the external data source type, and the location is the NameNode URI. You will find this value in <your Hadoop directory>\etc\hadoop\core-site.xml.

NameNode URI.jpg
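For reference, the relevant property in core-site.xml should look roughly like the following excerpt. The values are examples; your host name and port may differ:

<!-- NameNode URI, used as the LOCATION of the external data source -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>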

Once the source is created you will find it under “External Data Sources” folder in Management Studio:

External data source.jpg

It is important to remark that the location is not validated when you create the external data source.

Create a sample file for this example

Just for demo purposes, create a .csv file and populate it with a query from AdventureworksDW2016CTP3. This is just an example; you can create your own file and change the file format in the next section accordingly.

Here is my query:

SELECT TOP 1000
  [SalesOrderNumber]
 ,[SalesOrderLineNumber]
 ,p.EnglishProductName AS ProductName
 ,st.SalesTerritoryCountry
 ,[OrderQuantity]
 ,[UnitPrice]
 ,[ExtendedAmount]
 ,[SalesAmount]
 ,CONVERT(date, [OrderDate]) AS [OrderDate]
FROM [AdventureworksDW2016CTP3].[dbo].[FactInternetSales] a
INNER JOIN dbo.DimProduct p ON a.ProductKey = p.ProductKey
INNER JOIN dbo.DimSalesTerritory st ON st.SalesTerritoryKey = a.SalesTerritoryKey

I populated the csv file using Management Studio as follows:

Open the Export wizard: right click on the database name –> Tasks –> Export Data…

Export Data.jpg

Select a data source

select a data source.jpg

Choose a destination

Choose a destination.jpg

Specify a query to select the data to export

specify query

Source query

source query.jpg

Configure flat file destination

configure flat file destination.jpg

Save and run the package

save and run the package.jpg

Export done!

execution finished.jpg

Transfer the csv to HDFS

I created a directory called input in my Hadoop file system and stored the .csv file in c:\tmp.

In case you haven’t done this before, to create a directory in HDFS open a command prompt, go to your Hadoop directory and type:

<Your_hadoop_directory>hadoop fs -mkdir /input

Here is my shell command to move the file from the Windows file system to HDFS:

<Your_hadoop_directory>hadoop fs -copyFromLocal c:\tmp\AWExport.csv /input/

Set read, write and execute permissions for the owner, the members of your group and others:

<Your_hadoop_directory>hadoop fs -chmod 777 /input/AWExport.csv

List the files in the input directory:

<Your_hadoop_directory>hadoop fs -ls /input

hdfs commands.jpg

Create an external file format

To create a file format, copy and paste the following script into a query window in Management Studio:

CREATE EXTERNAL FILE FORMAT SalesExport WITH (
        FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (
                FIELD_TERMINATOR =';',
                DATE_FORMAT = 'yyyy-MM-dd' ,
                USE_TYPE_DEFAULT = TRUE
                           )
)

SalesExport is just the name I chose.

The format type is delimited text. There are some other types; more info here.

The field terminator is the same one I used when I exported the data to the flat file.

The date format also corresponds to the format in the flat file.

Create an external table

This table references the file stored in HDFS (in my case AWExport.csv). The column definitions correspond to the structure of the file.

CREATE EXTERNAL TABLE SalesImportcsv
(
    SalesOrderNumber nvarchar(20)
   ,SalesOrderLineNumber tinyint
   ,ProductName nvarchar(50)
   ,SalesTerritoryCountry nvarchar(50)
   ,OrderQuantity smallint
   ,UnitPrice money
   ,ExtendedAmount money
   ,SalesAmount money
   ,OrderDate date
)
WITH
(
   LOCATION = '/input/AWExport.csv',
   DATA_SOURCE = HDP2,
   FILE_FORMAT = SalesExport,
   REJECT_TYPE = VALUE,
   REJECT_VALUE = 0
)

Location: location of the file in HDFS.

Data Source: the one created in a previous step.

File Format: also the one created in a previous step.

Reject type: VALUE means the reject threshold is an absolute number of rows (the other option is PERCENTAGE).

Reject value: how many rows are allowed to fail. In this context a failed row is a dirty record, i.e. a row with a value that does not match the column definition.

MSDN Documentation

Query the external table

If everything works, you should be able to see the external table in Management Studio. Then just right-click it and select the top 1000 records, for example:

select from external table.jpg
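You can also query the external table with plain T-SQL like any other table. For example, a quick aggregation over the csv data stored in HDFS (an illustrative query against the table created above):

-- Aggregate the HDFS-backed data per country
SELECT TOP (10)
    SalesTerritoryCountry
   ,SUM(SalesAmount) AS TotalSales
FROM SalesImportcsv
GROUP BY SalesTerritoryCountry
ORDER BY TotalSales DESC;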

Further Topics

  • Insert records into an external table.
  • Configure an external data source with credentials.
  • Build an SSIS package to import and export data from Hadoop.
  • View the execution plans of Polybase queries.


My experience building Hadoop 2.7.1 on Windows Server 2012

Introduction

Building the Hadoop sources on Windows can be cumbersome, even though the official documentation states: “… building a Windows package from the sources is fairly straightforward”. There are several good resources containing the steps needed to successfully build a distribution. The most useful for me was this one:

Hadoop 2.7.1 for Windows 10 binary build with Visual Studio 2015 (unofficial)

I solved most of the hurdles with the directions in this blog post (thanks Kplitz Kahran), but I still had to suffer a little bit more. In this post I will show you the additional details I needed to fix.

You can try with the information in the link above, and if you have no luck, these solutions could help you.

Build winutils project – error C2065: ‘L’: undeclared identifier

In order to solve this problem, I just rewrote this line of code:

const WCHAR* wsceConfigRelativePath = WIDEN_STRING(STRINGIFY(WSCE_CONFIG_DIR)) L"\\" WIDEN_STRING(STRINGIFY(WSCE_CONFIG_FILE));

as:

const WCHAR* wsceConfigRelativePath = STRINGIFY(WSCE_CONFIG_DIR) "\\" STRINGIFY(WSCE_CONFIG_FILE);

Basically, the concatenated values are converted explicitly to wide characters (WCHAR) using a macro. I tested the concatenation without this explicit conversion and it worked. I am not sure why the original line fails; if someone can explain it, I would really appreciate it.
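For context, WIDEN_STRING and STRINGIFY are presumably defined along the lines of the usual two-step stringize/widen preprocessor idiom; the exact definitions in the Hadoop sources may differ slightly:

/* Assumed definitions - the standard stringize/widen macro pair */
#define STRINGIFY_(x)    #x              /* turn a token into "token"            */
#define STRINGIFY(x)     STRINGIFY_(x)   /* expand x first, then stringize it    */
#define WIDEN_STRING_(s) L ## s          /* paste L onto a narrow string literal */
#define WIDEN_STRING(s)  WIDEN_STRING_(s)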

Build native project – LINK Error to libwinutils

The libwinutils library is an external reference of the native project. Verify in the project properties, under the Linker section, that “Additional Library Directories” includes the target directory of the libwinutils project.

Here is a screencast with the steps above:

 

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin

After solving the previous errors, I thought I was the master of the universe until the next error damaged my enthusiasm again. Luckily the fix proposed by Rushikesh Garadade in this Stack Overflow thread solved the issue:

http://stackoverflow.com/questions/21752279/failed-to-execute-goal-org-apache-maven-pluginsmaven-antrun-plugin1-6-run-pr

After this, the build crashed again, but fortunately it was just a temporary network error. And finally, the happiest image of the day:

BuildHadoop

Hope that helps.

References

Build and Install Hadoop 2.x or newer on Windows

Hadoop 2.7.1 for Windows 10 binary build with Visual Studio 2015 (unofficial)

Working with Strings


Apache Kafka 0.8 on Windows

A very helpful step-by-step tutorial to help us learn and play with modern technologies using our Windows computer.

JanSchulte.com

Apache Kafka is a scalable, distributed messaging system, which is increasingly getting popular and used by such renowned companies like LinkedIn, Tumblr, Foursquare, Spotify and Netflix [1].

Setting up a Kafka development environment on a Windows machine requires some configuration, so I created this little step-by-step installation tutorial for all the people who want to save themselves from some hours work 😉



Apache Spark installation on Windows 10

 

Introduction

This post is meant to help people install and run Apache Spark on a computer with Windows 10 (it may also help with prior versions of Windows, or even Linux and Mac OS systems) who want to try the engine out and learn how to interact with it without spending too many resources. If you really want to build a serious prototype, I strongly recommend installing one of the virtual machines I mentioned in this post a couple of years ago: Hadoop self-learning with pre-configured Virtual Machines, or spending some money on a Hadoop distribution in the cloud. The new versions of these VMs come with Spark ready to use.

A few words about Apache Spark

Apache Spark is making a lot of noise in the IT world as a general engine for large-scale data processing, able to run programs up to 100x faster than Hadoop MapReduce thanks to its in-memory computing capabilities. It is possible to write Spark applications using Java, Python, Scala and R, and it comes with built-in libraries to work with structured data (Spark SQL), graph computation (GraphX), machine learning (MLlib) and streaming (Spark Streaming).

Spark runs on Hadoop, on Mesos, in the cloud or standalone. The last is the case in this post: we are going to install Spark 1.6.0 in standalone mode on a computer with a 32-bit Windows 10 installation (my very old laptop). Let’s get started.

Install or update Java

For any application that uses the Java Virtual Machine, it is always recommended to install the appropriate Java version. In this case I just updated my Java version as follows:

Start –> All apps –> Java –> Check For Updates

Check java updates

Update Java

 

In the same way you can verify your Java version. This is the version I used:

 

about java

Java Version

 

Download Scala

Download from here. Then execute the installer.

I just downloaded the binaries for my system:

download scala

Scala Download

 

 

Download Spark

Select any of the prebuilt versions from here.

Since we are not going to use Hadoop, it makes no difference which version you choose. I downloaded the following one:

Download spark

Spark Download

 

Feel free also to download the source code and make your own build if you feel comfortable with it.

Extract the files to any location in your drive with enough permissions for your user.

Download winutils.exe

This was the critical point for me: I downloaded one version and it did not work, until I realized that there are 64-bit and 32-bit versions of this file. You can find them here:

32-bit winutils.exe

64-bit winutils.exe

To make my trip even longer, I had to install Git to be able to download the 32-bit winutils.exe. If you know another link where this file can be found, please share it with us.

Git client download (I hope you don’t get stuck in this step)

Extract the folder containing the file winutils.exe to any location of your preference.

Environment Variables Configuration

This is also crucial in order to run some commands without problems from the command prompt. A command-line sketch with example values follows the list below.

  • _JAVA_OPTIONS: I set this variable to the value shown in the figure below. I was getting Java heap memory problems with the default values, and this setting fixed them.
  • HADOOP_HOME: even though Spark can run without Hadoop, the version I downloaded is prebuilt for Hadoop 2.6 and looks for it in the code. To fix this inconvenience, I set this variable to the folder containing the winutils.exe file.
  • JAVA_HOME: usually you already set this variable when you install Java, but it is better to verify that it exists and is correct.
  • SCALA_HOME: the bin folder of the Scala location. If you used the standard location from the installer, it should be the path in the figure below.
  • SPARK_HOME: the bin folder path of the location where you uncompressed Spark.
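As an illustration, the variables can be set from a command prompt with setx. All values below are hypothetical examples that only follow the descriptions above, so adjust every path to your own locations:

rem Hypothetical example values - adjust each path to your own setup
setx _JAVA_OPTIONS "-Xmx512M -Xms512M"
rem Folder containing winutils.exe
setx HADOOP_HOME "C:\winutils"
rem Your Java installation directory
setx JAVA_HOME "C:\Program Files\Java\jre1.8.0_73"
rem The bin folder of the Scala installation
setx SCALA_HOME "C:\Program Files (x86)\scala\bin"
rem The bin folder of the uncompressed Spark package
setx SPARK_HOME "C:\spark\spark-1.6.0-bin-hadoop2.6\bin"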

 

env variables 2

Environment Variables 1/2

env variables 1

Environment Variables 2/2

 

Permissions for the folder tmp/hive

I struggled a little bit with this issue. After I had set everything up, I tried to run the spark-shell from the command line and got an error that was hard to debug. The shell tries to find the folder tmp/hive and was not able to set the SQL context.

I looked at my C drive and found that the C:\tmp\hive folder had already been created. If not, you can create it yourself and set 777 permissions on it. In theory you can do this with the advanced sharing options of the Sharing tab in the folder properties, but I did it from the command line using winutils:

Open a command prompt as administrator and type:

chmod 777

Set 777 permissions for tmp/hive
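The command looks roughly like the following. The winutils.exe location is only an example; adjust it to wherever you extracted the file:

rem Hypothetical winutils.exe location - adjust the path to your own
C:\winutils\winutils.exe chmod 777 C:\tmp\hive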

 

Please be aware that you need to adjust the path of the winutils.exe above if you saved it to another location.

We are finally done and can start the spark-shell, which is an interactive way to analyze data using Scala or Python. This is also the way we are going to test our Spark installation.

Using the Scala Shell to run our first example

In the same command prompt, go to the Spark folder and type the following command to run the Scala shell:

 

start the spark shell

Start the Spark Scala Shell
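In my case the commands were essentially the following; the Spark folder name is hypothetical and depends on the package you downloaded:

rem Example path - use the folder where you extracted Spark
cd C:\spark\spark-1.6.0-bin-hadoop2.6
bin\spark-shell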

 

After some execution lines you should see a similar screen:

scala shell.jpg

Shell started

 

You are going to see several warnings and informational messages in the shell because we have not set various configuration options. For now, just ignore them.

Let’s run our first program in the shell. I took the example from the Spark Programming Guide. The first command creates a resilient distributed dataset (RDD) from a text file included in Spark’s root folder. After the RDD is created, the second command counts the number of items inside:
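The two lines, as given in the Spark Programming Guide, look like this when typed into the shell (README.md ships in Spark’s root folder):

// Create an RDD from a text file in Spark's root folder
val textFile = sc.textFile("README.md")
// Count the number of items (lines) in the RDD
textFile.count()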

second command.jpg

Running a Spark Example

 

And that’s it. I hope you could follow my explanation and were able to run this simple example. I wish you a lot of fun with Apache Spark.

References

Why does starting spark-shell fail with NullPointerException on Windows?

Apache Spark checkpoint issue on windows

Configure Standalone Spark on Windows 10


Business Intelligence without excuses part 1 – Business Analytics Platform Installation

Disclaimer

This first tutorial is part of a series that I’m planning in order to show how to use Pentaho to build BI applications. The expected audience is people without previous knowledge of Pentaho; for this reason, I decided to start from the very beginning. I think and hope that students or professionals who want to step into BI will find these tutorials useful.

For experienced Pentaho users I recommend this article to catch up on what’s new in BA Server 5.0 CE: A first look to the new Pentaho BA Server 5.0 CE

Introduction

The renamed Pentaho Business Analytics Platform is the central component that hosts the content of our BI application. From the platform it is possible to run and display reports and dashboards, manage security, perform OLAP analysis and many other tasks.

All Pentaho software, except the Pentaho Mobile App, requires the Sun/Oracle version 1.7 distribution of the Java Runtime Environment (JRE) or Java Development Kit (JDK); therefore, it is essential that Java is installed and that at least the variable JRE_HOME or JAVA_HOME is configured. I show how to set the JAVA_HOME system variable in a Windows environment.
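For example, on Windows the variable can be set from the command line as follows; the JDK path is hypothetical, so point it to your own installation:

rem Hypothetical JDK location - adjust to your installed Java 1.7 directory
setx JAVA_HOME "C:\Program Files\Java\jdk1.7.0_80"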

As I mentioned in the first post of this series, the first step is to download the BA Server from:

Pentaho Community 

Pentaho BA Server CE 5.0 installation 

Plugins Installation

Using the Marketplace plugin (which comes with the default installation), it is possible to install other useful plugins, which will be used to design dashboards, perform OLAP analysis, etc.

Users and Roles

The default installation comes with a set of users and their respective directories. Users with the admin role can see all of the directories. There is also a “Public” folder, where the examples shown in the screencast above are stored.

 

Summary

The Pentaho Business Analytics Platform hosts Pentaho-created and user-created content. It is open source and can easily be downloaded and installed. If you are a developer, especially a Java developer, I encourage you to dive in and study how the whole platform is built, understand the architecture behind it and, why not, collaborate with the community.

In future posts I will examine in detail some important features and characteristics of the Server.


Business Intelligence without excuses part 0 – Pentaho CE 5.0

It is well known that, not only in companies from different business sectors but also in private life, an enormous amount of data is collected every day, every hour, every minute…
The first question that always arises is what can be learned from this data.
There are a variety of technologies on the market to create applications that support the development of the sequence:

Data -> Information -> Knowledge

I don’t want to discuss which one is better; I have experience with both open source and non-open source tools and I have nothing to complain about. I just want to present, in a series of posts, the Pentaho Community Edition products and how to build a complete Business Intelligence application. I’ll try to cover the basics and some advanced tasks, but keep in mind that the tutorials are intended for people with zero knowledge of Pentaho. If you are an experienced Pentaho user, you may not find these tutorials interesting.

The first step is to download and install the Business Analytics Platform.
You can find it here: Pentaho Community

And remember, it is free. I’ll try to show you the basics, and at the end you will have NO EXCUSES not to profit from Business Intelligence.


Hadoop self-learning with pre-configured Virtual Machines

The first obstacle I found when I tried to learn Hadoop is that I don’t have a cluster at home and I don’t want to pay for resources in the cloud. Even if you have access to a cluster, setting up Hadoop can be an arduous task. There are so many new things to learn that I didn’t want to spend time struggling with the setup, because it could end up being frustrating.
The good news is that there are pre-configured Hadoop virtual machines that will help you learn by yourself.
Here I list three options, each one from a different Hadoop vendor. This is not a survey of Hadoop virtual machines, which would be very nice, by the way.
The scope of this post is just to give some information about the possibility of learning Hadoop using your laptop or desktop computer.
Pre-configured Hadoop VMs available as free downloads:

Hortonworks Sandbox
Cloudera’s CDH4
MapR M3, M5 and M7

Hope you enjoy learning!
