Running mlflow server using Docker, Azure Blob Service and Azure SQL Database

Introduction

It is indisputably true that mlflow has made life a lot easier not only for data scientists but also for data engineers, architects and others. There is a very helpful list of tutorials and examples in the official mlflow docs. You can just download it, open a console and start using it locally on your computer. This is the fastest way to get started. However, as soon as you progress and introduce mlflow in your team, or you want to use it extensively for yourself, some components should be deployed outside your laptop.

To exercise a deployment setup, and since I have Azure experience, I decided to provision a couple of resources in the cloud to deploy the model registry and store the data produced by the tracking server.

The code used during the article is available on github:

General overview

When I finished the diagram below, I noticed the code sits in the middle of everything. However, the code is usually developed locally. Data science teams must go beyond notebooks and operationalize their code. This enables integration with applications so that the models deliver their value to end users and machines.

Example architecture overview

Tracking server

The tracking server is basically an API and a UI. With the API you can log parameters, code versions, metrics and artifacts. Then you can use the UI to query and visualize the experiment results. An experiment is a set of runs, and a run is the execution of a piece of code. By default, the experiment values are recorded locally in a folder named mlruns in the directory where you call your code, as can be seen in the following figure:

mlflow records experiment results locally by default
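For readers new to the API, here is a minimal sketch of what such logging looks like in Python (the parameter, metric and file names are just placeholders, not taken from this article's code):

import mlflow

# A run is recorded inside the currently active experiment ("Default" if none is set)
with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)            # a hyperparameter
    mlflow.log_metric("rmse", 0.78)           # an evaluation metric
    mlflow.log_artifact("model_summary.txt")  # any output file produced by the run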

The results above can also be stored in a SQLAlchemy-compatible database. The place where you store this data is called the backend store. In this example I used an Azure SQL Database. The details are described in the next sections.

The clients running experiments store their artifact output, e.g., models, in a location called the artifact store. If nothing is configured, mlflow uses the mlruns directory by default, as shown in the next figure:

Artifact store location

This location should be able to handle large amounts of data. Storage services from several popular cloud providers are supported. In this example Azure Blob Storage is used.

MLflow Projects

A project is just a directory, in this example a git repository, where a descriptor file is placed to specify the dependencies and how the code is executed.
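As a hedged illustration of how a project is consumed, the public mlflow-example repository (not part of this article's code) can be launched straight from its git URL with the Python API:

import mlflow

# Clones the repository, reads its MLproject descriptor and runs the main entry point
submitted = mlflow.run(
    "https://github.com/mlflow/mlflow-example",
    parameters={"alpha": 0.5},
)
print(submitted.run_id)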

MLflow Models

This module offers a way to unify the deployment of machine learning models. It defines a convention in order to package and share your code.
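A rough sketch of that convention (the tiny training set below is made up): a model logged with a flavor-specific helper can later be loaded back through the generic pyfunc interface, independently of the framework that produced it.

import mlflow
import mlflow.pyfunc
import mlflow.sklearn
import pandas as pd
from sklearn.linear_model import ElasticNet

X = pd.DataFrame({"x": [0.0, 1.0, 2.0]})
y = [0.0, 1.0, 2.0]
model = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)

with mlflow.start_run() as run:
    # Packages the serialized model, its environment and an MLmodel descriptor as artifacts
    mlflow.sklearn.log_model(model, "model")

# Any logged model can be loaded back through the framework-agnostic pyfunc flavor
loaded = mlflow.pyfunc.load_model(f"runs:/{run.info.run_id}/model")
print(loaded.predict(pd.DataFrame({"x": [0.5]})))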

MLflow Registry

This is one of my favorite modules: a centralized model repository with a UI and a set of APIs for model lifecycle management. If you run your own MLflow server, a database-backed backend store must be configured, in this example an Azure SQL Database.
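As a small, hedged example of the lifecycle APIs (the model name and run id are placeholders), registering and promoting a logged model looks like this:

import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("http://localhost:5000")  # the tracking server described in this article

# Register the model artifact of an earlier run under a name in the registry
version = mlflow.register_model("runs:/<run-id>/model", "wine-quality")

# Move the new version through its lifecycle, e.g. to the Staging stage
MlflowClient().transition_model_version_stage(
    name="wine-quality", version=version.version, stage="Staging"
)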

Preparing a docker image for the tracking server

One important thing is to make your work shareable and reusable. I really like docker containers because they help me to achieve that.  You can run them locally and also easily deploy them in different ways on different cloud providers.

For that I first tried to directly use the image provided by Yves Callaert. You can find the image in this git repo.

This docker image is built from a python base image. The rest is quite simple: a couple of environment variables, the installation of the required python packages and an entry point. Unfortunately, as usual, as soon as you move away from the default configuration, things get complicated.

This docker image now must be able to connect to an Azure SQL Database using python. There are at least two major packages to achieve that. One is pymssql, which seems to be the older way and has some limitations when working with Azure. The other is pyodbc.

The next step is to add pyodbc to the requirements.txt file. But that was not all: in order to work, pyodbc needs the ODBC drivers installed on the image. The new image adds the SQL Server ODBC Driver 17 for Ubuntu 18.04.

The last thing was to update the requirements file as follows:

python requirements docker image

The entry point is the script startup.sh, which I modified as follows:

mlflow server --backend-store-uri "$BACKEND_URI" --default-artifact-root "$MLFLOW_SERVER_DEFAULT_ARTIFACT_ROOT" --host 0.0.0.0

You can find the upgraded code in my github repo.

Once you have downloaded the code just build the image. For instance, using your console, change the directory to the one with the DockerFile and issue:

docker build -t mlflowserver -f Dockerfile . --no-cache

Using blob storage for the tracking server artifact store

As explained in the architecture overview, an Azure Blob Storage account was created for the artifact store. To configure it, you set the default artifact root to a wasbs URI of the following form, and make the storage key available to the tracking server through the AZURE_STORAGE_ACCESS_KEY (or AZURE_STORAGE_CONNECTION_STRING) environment variable:

wasbs://<container>@<storage-account>.blob.core.windows.net/<path>

Of course, first create an Azure storage account and a container. I created a container named mlflow as shown in the following figure:

Artifact Store in Azure Blob Storage

And then my environment variable became:

MLFLOW_SERVER_DEFAULT_ARTIFACT_ROOT=wasbs://mlflow@mlautomationph271220.blob.core.windows.net

And to access the container from outside just set the storage account connection string environment variable:

AZURE_STORAGE_CONNECTION_STRING = <your azure storage connection string>

Using SQL server for the backend store

I created a serverless Azure SQL Database, a nice option for testing and prototyping. If you want to change to another pricing model, just configure another pricing tier.

From the SQL Server instance I need a user that can create and delete objects. I have not found in the documentation exactly which permissions this user needs, but at least it should be able to create and drop tables, foreign keys and constraints. To be honest, I just used the admin user; I still need to investigate this a bit deeper. Once you have your instance, user and password, you can build your connection string and also assign it to an environment variable as follows:

BACKEND_URI="mssql+pyodbc://<sqlserver user>:<password>@<your server>.database.windows.net:1433/<database name>?driver=ODBC+Driver+17+for+SQL+Server"
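Before starting the server you can optionally sanity-check the connection from Python; here is a hedged sketch with pyodbc, using the same placeholders as above:

import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<your server>.database.windows.net,1433;"
    "DATABASE=<database name>;"
    "UID=<sqlserver user>;"
    "PWD=<password>"
)

# Fails fast if the ODBC driver is missing or the firewall/credentials are wrong
with pyodbc.connect(conn_str, timeout=30) as conn:
    print(conn.execute("SELECT @@VERSION").fetchone()[0])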

Test

In order to test it I used the sklearn_elasticnet_wine example from the mlflow tutorial: Train, serve, and score a linear regression model

It is enough to change a couple of lines in the code to use the tracking server we created:

Add tracking server to the train code in python
  1. Set the tracking server URL; in my case I ran the docker container locally
  2. Set the experiment, passing its name as an argument. If the experiment doesn’t exist, it gets created
  3. Get the experiment ID
  4. Assign the experiment ID to the run (a sketch of these changes follows this list)
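A hedged sketch of those four changes at the top of train.py (the URL assumes the container is exposed locally on port 5000, and the experiment name is my own choice):

import mlflow

# 1. Point the client to the tracking server (here the container running locally)
mlflow.set_tracking_uri("http://localhost:5000")

# 2. Set the experiment by name; it is created if it does not exist yet
mlflow.set_experiment("sklearn_elasticnet_wine")

# 3. Get the experiment ID
experiment_id = mlflow.get_experiment_by_name("sklearn_elasticnet_wine").experiment_id

# 4. Assign the experiment ID to the run
with mlflow.start_run(experiment_id=experiment_id):
    pass  # the original training code stays unchanged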

I left everything else as it was.

Now it is time to open the console and run our experiment.

Hint: remember to set the environment variable AZURE_STORAGE_CONNECTION_STRING where you execute the code.

The examples have several python requirements files you need to install depending on the tutorial you want to run. To simplify this, I just wrote my conda environment down to a file in the folder “mlflow\examples\sklearn_elasticnet_wine”.

You can easily create a new conda environment using this file issuing:

conda create --name <env-name> --file mlflow\examples\sklearn_elasticnet_wine\requirements.txt

Time to execute the train.py script, from the root directory. I used different input values for the parameters alpha and l1_ratio, starting with 1 and 1:

Running the code

Parameters:

alpha   l1_ratio
1       1
1       0.5
0.5     1
0.25    0.65
Training parameters
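If you prefer not to type the four combinations by hand, a small, hypothetical wrapper can loop over them; it assumes train.py takes alpha and l1_ratio as positional arguments, as in the tutorial, and that the path below matches your checkout:

import subprocess
import sys

# The four (alpha, l1_ratio) combinations listed above
combinations = [(1, 1), (1, 0.5), (0.5, 1), (0.25, 0.65)]

for alpha, l1_ratio in combinations:
    subprocess.run(
        [sys.executable, "examples/sklearn_elasticnet_wine/train.py",
         str(alpha), str(l1_ratio)],
        check=True,  # stop the sweep if one training run fails
    )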

Visualize experiment results using the tracking server UI

If you open the UI of the tracking server using your favorite browser you can visualize the experiment results:

Experiment Results MLflow Tracking Server

If you click on the start time you can open a single run and inspect the code version, parameters, metrics and artifacts:

Single Run Results MLflow Tracking Server

If you scroll down to the bottom you can inspect the artifacts:

Experiment Artifacts

We can also verify that the backend store tables were created in the Azure SQL Database instance:

For a complete description please refer to the official documentation using the link provided at the beginning of the post.

Deploy the model

If you are still not excited, now comes a very interesting part. Models cannot just stay on your laptop; you need to serve them somehow to applications and integrate them with other software pieces. Deploying the models to a web server as REST APIs is an excellent option to expose them as services.

To demonstrate mlflow deployment capabilities let’s deploy a REST server locally using:

mlflow models serve -m wasbs://mlflow@mlautomationph271220.blob.core.windows.net/0/866a64d8b7de488e83b985bd89d84afe/artifacts/model -p 1234

You need to replace the model location with the actual one. I found it in my previous screenshot:

Model location

Here we go:

Starting REST Server

The server is now running. Since I really like Postman, let's just test the service with it. I will use the same input data as in the tutorial, which is a JSON-serialized pandas DataFrame:

Test REST Server using Postman
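If you would rather script the call than use Postman, here is a hedged sketch with the requests library; the row mirrors the tutorial's sample input in the pandas-split JSON format (the exact payload format and content type depend on your mlflow version):

import requests

# One wine sample as a JSON-serialized pandas DataFrame ("split" orientation)
payload = {
    "columns": [
        "fixed acidity", "volatile acidity", "citric acid", "residual sugar",
        "chlorides", "free sulfur dioxide", "total sulfur dioxide", "density",
        "pH", "sulphates", "alcohol",
    ],
    "data": [[6.2, 0.66, 0.48, 1.2, 0.029, 29, 75, 0.98, 3.33, 0.39, 12.8]],
}

response = requests.post(
    "http://127.0.0.1:1234/invocations",
    json=payload,
    headers={"Content-Type": "application/json; format=pandas-split"},
)
print(response.status_code, response.json())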

Voilà, that's it. Now we can score incoming data with a REST call!

Further steps

To get completely away from local development, a VM, a container instance, or another service should be provisioned to run the mlflow docker container.

Also, the REST server we created at the end should be deployed outside a laptop.

Once all the infrastructure is provisioned in the cloud, it would be very helpful to have an ARM template to easily replicate and version the complete environment.

References


Azure Digital Twins Management with Python

Introduction

As mentioned in my previous post, the Azure Digital Twins (ADT) Service is a new way to build next generation IoT solutions. In the first post I showed you in a video how to manage ADT instances with the ADT Explorer. In the second post I showed how to do mostly the same but using Postman and the ADT REST API.

ADT has control plane APIs and data plane APIs. The latter are used to manage elements in the ADT instance. In order to use these APIs Microsoft published a .NET (C#) SDK. An SDK is a convenient way to manage instances, since you can easily create applications for your digital twins. If for any reason you prefer to use another language like Java, JavaScript or Python, you need to generate your own SDK.

In this post I describe how to autogenerate a Python SDK using the tool Autorest and a swagger file.

An example of a generated SDK can be found in this repo: https://github.com/pauldj54/ADTDataPlane

Autorest

Autorest is a tool to generate client libraries for accessing RESTful web services. It is available on github: https://github.com/Azure/autorest

Note: Autorest requires Node.js, and version 10.x is recommended. If you need multiple versions of Node.js I recommend the tool nvm-windows, which can be downloaded from this link: https://github.com/coreybutler/nvm-windows/releases

I will use PowerShell with admin rights for the next steps.

Now let’s select the desired Node.js version:

Node.js setup using nvm

The steps shown in the figure above are the following:

  • Print the current versions
  • Since only v12.x was available, install the 10.x version
  • List the versions available to confirm
  • Change the used version to 10.x
  • Confirm the changes.

Note the prefix “*” marking the selected Node.js version.

To install autorest I followed the steps from the official Azure documentation https://docs.microsoft.com/en-us/azure/digital-twins/how-to-create-custom-sdks:

npm install -g autorest
# run using command 'autorest'
autorest

Generate our own SDK

  1. In order to generate our own SDK, the Swagger file with the ADT data plane API definitions is needed; it can be downloaded from here. Please be aware that the “examples” folder is also required, otherwise Autorest throws an error
  2. Place the downloads in a directory on your computer. I created a folder under my git directory with the name “adt_dataplane_api”.
  3. Open a console and navigate to the directory created in the previous step. Issue the following command: 
autorest --input-file=digitaltwins.json --python --output-folder=ADTApi --add-credentials --azure-arm --package-name=ADTApi --package-version=1.0

Basically you point to the swagger file (digitaltwins.json), select python as the output language, and specify an output folder, package name and other details.

Python SDK generation using Autorest

4. If everything ran successfully you should see the following output:

Resulting folder layout

Converting our own SDK to a python package

It is very convenient to convert the generated SDK in a python package and include it in the environments as needed. In order to do so, I followed these steps:

  1. Create a setup.py file in the “ADTDataPlane” directory:
from setuptools import setup, find_packages

setup(
    name='adtdataplane',
    version='0.1',
    description='ADT data plane python SDK',
    author='Azure Digital Twin autogenerated using autorest by Paul Hernandez',
    url='https://github.com/pauldj54/ADTDataPlane',
    packages=find_packages(exclude=['tests*'])
)


2. Add the auto-generated code to git. I did it in my github using these directions

3. Now we are ready to install our newly generated package 😊

Installing the generated SDK using pip

4. Verify the installation:

Python packages available in this environment

Manage an ADT Instance

Once we have our SDK python package available, it is time to test it. For this post I registered an AAD application (app registration) and I am using the same ADT instance as in the previous post.

  1. Find your Application (client) ID and Directory (tenant) ID:
client and tenant ID in the App registrations (Azure Portal)

2. Create a client secret and write it down:

Create a client secret to access your application

3. Grant the role “Azure Digital Twins Owner (Preview)” to the registered app:

Grant the correspondent role

4. Create a config file in the root directory (or another directory) within your python project and name it, for instance, settings.json. Hint: secrets and other sensitive information will be stored in this file, so make sure you don’t push it to git or your source control.

The file should look like this:

{
    "client_id" : "<your-client-id>",
    "tenant_id" : "<your-tenant-id>",
    "adt_instance_url" : "https://management.azure.com",
    "secret" : "<your-secret>",
    "endpoint" : "https://<your-adt-instance>.api.neu.digitaltwins.azure.net",
    "scope" : ["https://digitaltwins.azure.net/.default"],
    "authority" : "https://login.microsoftonline.com/<your-tenant-id>"
}

5. Create a DTDL sample file to test the code. I created the file “SampleModel.json”, quite similar to the one in the official documentation:

{
  "@id": "dtmi:com:contoso:SampleModelPy;1",
  "@type": "Interface",
  "displayName": "SampleModelPy",
  "contents": [
    {
      "@type": "Relationship",
      "name": "contains"
    },
    {
      "@type": "Property",
      "name": "data",
      "schema": "string"
    }
  ],
  "@context": "dtmi:dtdl:context;2"
}

6. Import the following modules and install them if required:

import msal 
from msrestazure.azure_active_directory import AADTokenCredentials
import adtdataplane 
import logging
from azure.mgmt.consumption.models.error_response import ErrorResponseException
import json

msal is the Microsoft Authentication Library and is the preferred library according to the documentation. AADTokenCredentials is the class used to build the credentials, and adtdataplane is our generated SDK. Some other packages are required by the code.

7. Load the config file and create a confidential client application as follows:

# Load Config file
with open(r"settings.json") as f:
  config = json.load(f)

# Create a preferably long-lived app instance that maintains a token cache.
app = msal.ConfidentialClientApplication(
    config["client_id"], authority=config["authority"],
    client_credential=config["secret"],
    # token_cache=...  # Default cache is in memory only.
                       # You can learn how to use SerializableTokenCache from
                       # https://msal-python.rtfd.io/en/latest/#msal.SerializableTokenCache
    )


8. I used this code snippet from the azure python sdk examples to obtain a token:

# The pattern to acquire a token looks like this.
result = None

# First, the code looks up a token from the cache.
# Because we're looking for a token for the current app, not for a user,
# use None for the account parameter.
result = app.acquire_token_silent(config["scope"], account=None)

if not result:
    logging.info("No suitable token exists in cache. Let's get a new one from AAD.")
    result = app.acquire_token_for_client(scopes=config["scope"])

if "access_token" in result:
    # Call a protected API with the access token.
    print(result["token_type"], result["access_token"])
else:
    print(result.get("error"))
    print(result.get("error_description"))
    print(result.get("correlation_id"))  # You might need this when reporting a bug. 

9. I transformed the acquired token into AAD token credentials and created an SDK client:

credentials = AADTokenCredentials(result)

try:
    client = adtdataplane.AzureDigitalTwinsAPI(credentials = credentials, base_url = config['endpoint'])
    logging.info("Service client created – ready to go")
except ValueError as err:
    print('Client creation failed with error: {0}'.format(err))

10. Now we can load a DTDL model:

# load models
with open(r"models\SampleModel.json") as f:
  dtdl = json.load(f)
dtdl_list = []
dtdl_list.append(dtdl)
try:
  response = client.digital_twin_models.add(model = dtdl_list, raw=True)
  print(response)
except adtdataplane.models.ErrorResponseException as e:
  print(e)

Please note the model location and modify it accordingly.

11. Verify if the model was created

# Verify the model was created
response = client.digital_twin_models.get_by_id('dtmi:com:contoso:SampleModelPy;1')
print(response)

You should see something like this:

DTDL model retrieved

12. We can also verify that the model was correctly uploaded using the ADT Explorer:

ADT Explorer available DTDL models
ADT Explorer sample model definition

The entire python code:

import msal 
from msrestazure.azure_active_directory import AADTokenCredentials
import adtdataplane 
import logging
from azure.mgmt.consumption.models.error_response import ErrorResponseException
import json

# Load Config file
with open(r"settings.json") as f:
  config = json.load(f)

# Create a preferably long-lived app instance that maintains a token cache.
app = msal.ConfidentialClientApplication(
    config["client_id"], authority=config["authority"],
    client_credential=config["secret"],
    # token_cache=...  # Default cache is in memory only.
                       # You can learn how to use SerializableTokenCache from
                       # https://msal-python.rtfd.io/en/latest/#msal.SerializableTokenCache
    )

# The pattern to acquire a token looks like this.
result = None

# First, the code looks up a token from the cache.
# Because we're looking for a token for the current app, not for a user,
# use None for the account parameter.
result = app.acquire_token_silent(config["scope"], account=None)

if not result:
    logging.info("No suitable token exists in cache. Let's get a new one from AAD.")
    result = app.acquire_token_for_client(scopes=config["scope"])

if "access_token" in result:
    # Call a protected API with the access token.
    print(result["token_type"], result["access_token"])
else:
    print(result.get("error"))
    print(result.get("error_description"))
    print(result.get("correlation_id"))  # You might need this when reporting a bug. 

credentials = AADTokenCredentials(result)

try:
    client = adtdataplane.AzureDigitalTwinsAPI(credentials = credentials, base_url = config['endpoint'])
    logging.info("Service client created – ready to go")
except ValueError as err:
    print('Client creation failed with error: {0}'.format(err))

# load models
with open(r"models\SampleModel.json") as f:
  dtdl = json.load(f)
dtdl_list = []
dtdl_list.append(dtdl)
try:
  response = client.digital_twin_models.add(model = dtdl_list, raw=True)
  print(response)
except adtdataplane.models.ErrorResponseException as e:
  print(e)

# Verify the model was created
response = client.digital_twin_models.get_by_id('dtmi:com:contoso:SampleModelPy;1')
print(response)

Next steps

Even though the scenario presented in this post is extremely basic, you now have a python SDK to manage ADT instances. The benefit is more obvious when you have some data sets and want to populate your twins. In a future post I would like to show you how to write an ingestion program using the python SDK and a data source, most probably a CSV or a JSON file; let’s see what the open data world offers us.

References


Azure Digital Twins Management with Postman

In this video I will show how to manage Azure Digital Twins models and instances using Postman. How to use the ADT explorer is explained in my previous post: https://hernandezpaul.wordpress.com/2020/07/24/azure-digital-twins-and-adt-explorer-say-hello

ADT Management using Postman

In order to make the postman collection work you need to configure an environment as follows:

Postman Environment required

  • tenantId = your tenant ID; it can be found in the registered app
  • accessToken = will be populated within a script
  • adtInstanceURL = the hostname of your ADT instance
  • clientId = as in your registered app
  • clientSecret = the one you generated in the registered app (see video)
  • scope = https://digitaltwins.azure.net/.default

You can find all the DTDL models and the postman collection in this repository:

https://github.com/pauldj54/adt-agrifood

The Swagger file of the ADT Management API:

https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/preview/2020-03-01-preview


Azure Digital Twins and ADT Explorer – say hello!

ADT Explorer evaluation

Introduction

Azure Digital Twins Service offers a way to build next generation IoT solutions. There are other approaches on the market to describe IoT devices and build digital twins. Without making a formal comparison, I can say that with Azure Digital Twins it is possible to build a powerful semantic layer on top of your connected devices using domain specific models.

To show you how this works, let’s create a kind of “hello world” example. An end-to-end solution is out of scope for this post. Instead I will create some hands-on tutorials to demonstrate some of the functionality.

Scenario description

Let’s consider the following simplified use case.

I like farming, even though I am really a rookie on this topic. Let’s suppose we have a parcel in a farm. The parcel has a soil. There are also different product types in every soil.

Soil quality is an extensive topic and can be measured using a set of physical, chemical and biological properties. One of them is the soil pH. Suppose we have one or more devices able to measure the soil pH and send the measured values to a local gateway, which transmits them to our digital twin instance in Azure. For more information about soil quality please visit this document:

https://ag.tennessee.edu/biodegradablemulch/Documents/What_is_Soil_Quality_Aug5_2015.pdf

Use Case Diagram

In the first video I only show you how to use the Azure Digital Twins Explorer. The use case is just a reference and I hope it makes a little bit of sense.

Prerequisites

Create a digital twin instance

https://docs.microsoft.com/en-us/azure/digital-twins/how-to-set-up-instance

Create an App registration in Azure Active Directory

https://docs.microsoft.com/en-us/azure/digital-twins/how-to-authenticate-client

From the registered app we will need the Application (client) ID and Directory (tenant) ID, since we are going to use an OAuth 2.0 authorization code flow. To learn more about authentication flows please visit this article: https://docs.microsoft.com/en-us/azure/active-directory/develop/authentication-flows-app-scenarios

Azure Registered App in Azure Active Directory
Registered App

The next step is to grant the registered app the permissions to interact with the digital twins service instance. There are two roles for that in the current preview version, “Azure Digital Twins Owner” and “Azure Digital Twins Reader”. We will use the owner role in this example.

Add Role assignment for the registered app in the ADT service instance

DTDL models

In order to model the data I used the FIWARE Agrifood smart data models as a starting point: https://github.com/smart-data-models/dataModel.Agrifood

I also created a super class called “Thing” in order to demonstrate inheritance in DTDL.

The created models are available in my github:

https://github.com/pauldj54/adt-agrifood

Model diagram:

DTDL Class Diagram

ADT Explorer

The Azure Digital Twins (ADT) Explorer is an open source tool that allows model management, instance creation, relationship creation, graph visualization and running queries against our ADT instance. It can be downloaded here: https://github.com/Azure-Samples/digital-twins-explorer/tree/master/

In the video I will show how to:

  • Upload models
  • Create Instances
  • Create Relationships
  • Execute some queries

Streaming Technologies Comparison

After some time I decided to share my notes comparing different open source streaming technologies on LinkedIn: Streaming Technologies Comparison
https://www.linkedin.com/pulse/streaming-technologies-comparison-paul-hernandez


Installing Apache Zeppelin 0.7.3 in HDP 2.5.3 with Spark and Spark2 Interpreters

Background

As a recent client requirement I needed to propose a solution to add spark2 as an interpreter to zeppelin in HDP (Hortonworks Data Platform) 2.5.3.
The first hurdle is that HDP 2.5.3 comes with zeppelin 0.6.0, which does not support spark2; spark2 was only included as a technical preview. Upgrading the HDP version was not an option due to the effort and platform availability. In the end I found a solution in the HCC (Hortonworks Community Connection), which involves installing a standalone zeppelin that does not affect the Ambari-managed zeppelin delivered with HDP 2.5.3.
I want to share with you how I did it.

Preliminary steps

Stop the current Zeppelin (version 0.6.0, which comes with HDP 2.5.3):

su zeppelin
 /usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh stop

Deactivate the script that starts this version on a system reboot.
Zeppelin is started as an Ambari dependency in the script:

 usr/lib/hue/tools/start_deps.mf

In order to avoid a modification in this file, a custom init script could be created to stop the default HDP Zeppelin and start the newer version.

Apache Zeppelin Installation

Download Zeppelin: https://zeppelin.apache.org/download.html
Copy the .tar file to the /tmp directory using WinSCP
Extract the .tar file to the target directory, e.g. /opt

tar -xvf zeppelin-0.7.3-bin-all.tar -C /opt

Create a symlink to the last version (optional)

sudo ln -s zeppelin-0.7.3-bin-all/ zeppelin

Change the ownership of the folder

chown -R zeppelin:zeppelin /opt/zeppelin

Zeppelin Configuration

First copy the “conf” directory from the existing zeppelin installation to the new version:

sudo yes | cp -rf /usr/hdp/current/zeppelin-server/conf/ /opt/zeppelin

In order to configure zeppelin to work with both the spark and spark2 clients, the SPARK_HOME variable needs to be set per interpreter and commented out in the zeppelin-env.sh configuration file:
/opt/zeppelin/conf/zeppelin-env.sh

edit zeppelin-env

zeppelin-env.sh

According to the documentation, the variable ZEPPELIN_JAVA_OPTS changed for spark2 to ZEPPELIN_INTP_JAVA_OPTS. Since both versions are active, these two variables are defined:

export ZEPPELIN_JAVA_OPTS="-Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default"

export ZEPPELIN_INTP_JAVA_OPTS="-Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default"

Start zeppelin 0.7.3

su zeppelin
/opt/zeppelin/bin/zeppelin-daemon.sh start

A pending issue here is to modify the startup scripts in order to persist the changes across a system reboot.

Configuring the spark interpreters

Navigate to the interpreter settings page:

interpreter menu

Open Interpreter Menu

Scroll-down to the spark interpreter and add the property:

SPARK_HOME = /usr/hdp/current/spark-client

add property spark interpreter

Add SPARK_HOME property to the spark interpreter

Create a new interpreter with interpreter group spark and name it spark2

Add new interpreter

create new interpreter

Create a new interpreter

Interpreter name and group (leave all other values as default)

create spark2 interpreter

Set interpreter name and group

Add the property:

SPARK_HOME = /usr/hdp/current/spark2-client

add property spark2 interpreter

Add SPARK_HOME property to the spark2 interpreter

Installation test

In order to test the installation create a new notebook and verify the binding of the interpreters

interpreter binding

Interpreter binding for the test notebook

Execute the following code in two different paragraphs:

%spark
sc.version

%spark2
sc.version

spark2 test

Test notebook

References


Talend job to lookup geographic coordinates into a shape file

Introduction

Recently, for an open data integration project, I had to select some tools to be able to process geospatial data. I had a couple of choices: I could use R and try to work out a solution with the packages available on the server, or use Talend. One of the biggest restrictions was that the development environment had no internet connection due to security policies, and I wanted to try some options interactively. I decided to give Talend a try and asked the system admins to install the spatial plugin. I had only tried Talend before to accomplish some exercises from the book Talend for Big Data, but never used it for a “real-world” project, which was challenging but also made me feel motivated.

Software requirements

Talend open studio for big data

https://www.talend.com/download/

Spatial extension for Talend

https://talend-spatial.github.io/

The experiment

Input data

Customer coordinates: a flat file containing x,y coordinates for every customer.

Municipalities in Austria: a shape file with multi-polygons defining the municipality areas in Austria: source

Goal

Use the x,y coordinates of the customers to “look up” the municipality code GKZ in the shape file, which in German stands for “Gemeindekennzahl”. The idea is to determine in which municipality every point (customer location) lies.
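Not part of the Talend job, but just for comparison: the same lookup can be sketched in Python with a recent geopandas version (the file names and the GKZ column name are assumptions about the input data):

import geopandas as gpd
import pandas as pd

# Municipality polygons, reprojected from MGI / Austria Lambert to WGS84
municipalities = gpd.read_file("municipalities_austria.shp").to_crs(epsg=4326)

# Customer locations: a flat file with x,y coordinates in WGS84
customers = pd.read_csv("customers.csv")
points = gpd.GeoDataFrame(
    customers,
    geometry=gpd.points_from_xy(customers["x"], customers["y"]),
    crs="EPSG:4326",
)

# Spatial join: which municipality polygon contains each customer point?
result = gpd.sjoin(points, municipalities[["GKZ", "geometry"]], predicate="within")
print(result[["x", "y", "GKZ"]].head())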

This is an overview of the overall Talend job

jobOverview

Figure 1. Talend Job Overview

Create a generic schema

crateSchema.jpg

Figure 2. Create a generic schema

Use a sShapeFileInput component

Shapefile Input.JPG

Figure 3. Shape file input

The shapefile contains multipolygons and I want to have polygons. My solution was to use an sSimplify component with the default settings. You may need to analyze the source metadata to find out what kind of data is available within the shape file.

The projection of the shapefile was “MGI / Austria Lambert”, which corresponds to EPSG 31287. I wanted to re-project it to EPSG 4326 (GCS_WGS_1984), which is the one used by my input coordinates.

sProj

Figure 4. Re-project the polygons

I read the x, y coordinates from a csv file.

With an s2DPointReplacer I converted the x,y coordinates into Point(x,y) (WKT: well-known text).

PointReplacer

Figure 5. Point replacer

Finally I created an expression in a tMap just to get the polygon and point intersection. The “contains” function would also work:

tmap

Figure 6. Calculate the intersection between the input geometries

Conclusion

Talend did the job and I recommend it as an alternative not only for classical ETL projects but also to create analytical data sets to be consumed by data scientists. Sometimes data cleansing (or data munging/wrangling, or whatever you want to call it) can be cumbersome with scripting languages. With Talend the jobs are easy to understand, and they can be fully parameterized and reused.

References


Connect to Hive using Teradata Studio 16

Introduction

Teradata Studio is the client used to perform database administration tasks on Aster and Teradata databases, as well as to move data from and to Hadoop. Recently I was asked to test a solution to integrate Hadoop with Teradata in order to build a modern Data Warehouse architecture; this was my first step and I want to share it with you.

Teradata Studio Download

1. Download Teradata Studio 16 following this link:

Teradata Studio Download

2. Open Teradata studio

Teradata Studio has three different profiles

  • Administration
  • Query Development
  • Data Transfer

3. Change to the query development profile for this quick demo

change-profile

Change profile view

Create a new Hadoop connection profile

1. Click on the New Connection Profile button as shown in the figure

create-a-new-connection-profile

Create a new connection profile

Depending on the Hadoop distribution used you have to change the following parameters.

I tested it with Hortonworks HDP 2.5 with hive.server2.transport.mode = http

2. Select Hadoop as profile type and give it a name. Click on Next:

create-a-hadoop-profile

Create a Hadoop profile

3. Select Hortonworks and Hive connection service

Hive connection service.png

Hive connection service

4. Set WebHCat Connection Properties and test the connection. Click on Next:

webhcat-connection-properties

WebHCat connection properties

Again, I used the host name and credentials from my environment.

5. Set the JDBC connection properties. In my case I used the foodmart sample database.

jdbc-connection-properties

JDBC connection properties

Test the connection

If everything was properly set in the previous steps you should be able to see your databases in the Data Source Explorer:

databases-available

Data source explorer

 

Open a SQL Editor and execute a query. I used this sample query:

select c.country,  s.store_Type, sum(unit_sales) as sum_unit_sales
from sales_fact_dec_1998 as f
inner join customer as c
on c.customer_id = f.customer_id
inner join store as s
on s.store_id = f.store_id
group by c.country,  s.store_Type
order by c.country,  s.store_Type

Result Set

resultset

Result set

And that’s it. I hope you find it useful.


Teradata Express 15.10 Installation using Oracle VirtualBox

Introduction

For professional reasons I needed to start learning Teradata after some years of intensive Microsoft BI projects. To break the ice and have a playground where I can test everything I want, I decided to download the newest Teradata Express virtual machine (TDE), which comes with the 15.10 engine plus some additional tools. In my current company I am not able to use VMware (for some dark reasons) and am only allowed to use Oracle VirtualBox. I would like to share the steps I followed with you.

1.  Download Teradata Express 15

The latest virtual machine could be downloaded from: http://downloads.teradata.com/download/database/teradata-express-for-vmware-player

The image is only available for VMware and an account is required to download it.

2.  Create a new Virtual Box Machine

Open Oracle Virtual Box

  • Click on “New”
  • Enter a name for the new machine
  • Select Linux as the type
  • Select openSUSE (64-bit) as the version; it is the most similar Linux version.

 

create-virtual-machine

Create a new VM

  • Depending on your local resources assign a memory size (greater than 1GB)

create-virtual-machine2

Set the RAM memory

  • Do not add a hard disc and click on “Create”

 

create-virtual-machine3

Do not add a virtual hard disk

  • On the Warning pop-up click on “Continue”

 

create-virtual-machine4

Ignore the warning

 

  • Select the created VM, click on “Settings” and go to the “Storage” section:

 

create-virtual-machine5

Storage settings

The VMware image comes with SATA hard disks, but Oracle VirtualBox needs a SCSI controller for the Teradata Express machine.

  • Delete SATA Controller

create-virtual-machine6

Delete SATA controller

  • Add SCSI Controller

create-virtual-machine7

Add SCSI controller

  • Add a hard disk

create-virtual-machine8

Add hard disk

  • Choose existing disk
  • Go to the location where you extracted the TDExpress15.10…………disk1.vmdk file and select it

 

create-virtual-machine9

Select virtual hard disk file

  • Repeat the previous step for disks 2 and 3
  • Go to the “System” section and in the “Acceleration” Tab select “Hyper-V” as the paravirtualization interface

 

create-virtual-machine10

Adjust virtualization settings

  • Click “Ok” and close the VM settings.
  • Click on “Start” to run the VM

3.  Start and log into the VM

Start the VM. The first screen you should see is the following

Default login and password is root

logintothemachine1

Start virtual machine

  • Select the highlighted option

The first time you start the machine the Gnome Interface is not started. You should see a login screen similar to this:

logintothemachine2

Log into the virtual machine in console mode

  • In order to fix it, log in and issue the following commands:
vmware-uninstall-tools.pl
mv /etc/X11/xorg.conf /etc/X11/xorg.conf.vmware
reboot

If everything was properly set in the previous step you should be able to see a similar login screen:

logintothemachine4

Log into the virtual machine – Gnome

4.  Add the Virtual Box Linux Guest Additions

  • Mount the ISO image of the guest additions by clicking on the Devices menu -> CD/DVD devices and pointing to the GuestAdditions ISO file. The Guest Additions ISO is available in the Program Files\Oracle\VirtualBox folder
  • Open a terminal and execute the following commands:
cd /media
mkdir vbox
sudo mount /dev/sr0 vbox/
cd vbox/
./VBoxLinuxAdditions.run
reboot

add-guest-additionals

Install VBox additions

5.  Test the Teradata Installation

  • Open Teradata Studio Express (the icon is available on the Desktop)
  • Right click on “Database Connections” –> New…

test-installation-1

Create new connection

  • Select “Teradata Database” and give it a name

test-installation-2

Teradata Database connection profile

  • Connection Details:
    • Database Server Name: 127.0.0.1
    • User Name: dbc
    • Password: dbc
    • Use the default values for the other fields
  • Click on “Test Connection”
  • Click on “Finish”

test-installation-3

Test created connection

  • ENJOY!!!

References

Migrating from VMware to VirtualBox (Part 1): Oracle Enterprise Linux

Teradata Express 14.0 for VMware User Guide

Teradata Express Edition 14.10 converting from VMWare to VirtualBox


Apache Zeppelin installation on Windows 10

Disclaimer: I am not a Windows or Microsoft fan, but I am a frequent Windows user and it’s the most common OS I find in the enterprise everywhere. Therefore, I decided to try Apache Zeppelin on my Windows 10 laptop and share my experience with you. The behavior should be similar in other operating systems.

Introduction

It is not a secret that Apache Spark became a reference as a powerful cluster computing framework, especially useful for machine learning applications and big data processing. Applications can be written in several languages such as Java, Scala, Python or R. Apache Zeppelin is a web-based tool that, according to the official project website (Apache Zeppelin), tries to cover all of our needs:

  • Data ingestion
  • Data discovery
  • Data analytics
  • Data visualization and collaboration

The interpreter concept is what makes Zeppelin powerful, because you can theoretically plug in any language/data-processing backend. It provides built-in Spark integration, and that is what I tested first.

Apache Zeppelin Download

You can download the latest release from this link: download

I downloaded the version 0.6.2 binary package with all interpreters.

Since this version, the Spark interpreter is compatible with Spark 2.0 and Scala 2.11

According to the documentation, it supports Oracle JDK 1.7 (I guess it should also work with 1.8) and Mac OSX, Ubuntu 14.4, CentOS 6.X and Windows 7 Pro SP1 (and, according to my tests, also Windows 10 Home).

Too much bla bla bla, let’s get started.

Zeppelin Installation

After the download, open the file (I used 7-Zip) and extract it to a proper location (in my case just the C drive, to avoid possible problems).

Set the JAVA_HOME system variable to your JDK bin folder.

Set the variable HADOOP_HOME to your Hadoop folder location. If you don’t have the HADOOP binaries you can download my binaries from here: Hadoop-2.7.1

system-variables

My system variables

I am not really sure why Hadoop is needed, since Zeppelin is supposed to be self-contained, but I guess Spark looks for winutils.exe if you are using Windows. I posted about it in my previous post: Apache Spark Installation on Windows 10

This is the error I found in the Zeppelin logs (ZEPPELIN_DIR\logs -> there is a file for the server log and a separate file for each interpreter):

winutils error.JPG

winutils.exe error

Zeppelin Configuration

There are several settings you can adjust. Basically, there are two main files in the ZEPPELIN_DIR\conf :

  • zeppelin-env
  • zeppelin-site.xml

In the first one you can configure some interpreter settings. In the second, aspects related to the website, for instance the Zeppelin server port (I am using 8080, but most probably yours is already used by another application).

If you don’t touch the zeppelin-env file, Zeppelin uses the built-in Spark version, which is the one used for the results posted in this entry.

Start Zeppelin

Open a command prompt and start Zeppelin by executing the zeppelin.cmd in Drive:\ZEPPELIN_DIR\bin\zeppelin.cmd

start-zeppelin

Start Zeppelin

Then, open your favorite browser and navigate to localhost:8080 (or the one you set in the zeppelin-site.xml)

You should see the starting page. Verify that the indicator in the top-right side of the window is green; otherwise your server is down or not running properly.

zeppelin home.JPG

Zeppelin home

If you have not configured Hive, then before trying the tutorials included in the release you need to set the value of zeppelin.spark.useHiveContext to false. Apart from the config files, Zeppelin has an interpreter configuration page. You can find it by clicking on your user “anonymous” -> Interpreter

interpreter-config

Go to interpreter settings

Scroll-down to the bottom where you’ll find the Spark config values:

spark interpreter properties.JPG

Spark interpreter settings

Press on the edit button and change the value to false in order to use the SQL context instead of Hive.

Press the Save button to persist the change:

hive-content-set-to-false

Set zeppelin.spark.useHiveContext to false

Now let’s try the Zeppelin Tutorial

From the Notebook menu click on the Zeppelin Tutorial link:

zeppelin-tutorial

Navigate to the Zeppelin Tutorial

The first time you open it, Zeppelin asks you to set the interpreter bindings:

interpreter bindings 1.JPG

Interpreter binding

Just scroll-down and save them:

interpreter-bindings-2

Save binding

Some notes are presented with different layouts. For more about the display system visit the documentation online.

Other possible annoying error

I was getting the following error when I tried to run some notes in the Zeppelin Tutorial:

spark-warehouse folder 2.JPG

Spark warehouse URI error

I found a suggested solution in the following stack overflow question: link

A URI syntax exception was thrown while trying to find the folder spark-warehouse in the Zeppelin folder. I struggled a little bit with that. The folder was not created in my Zeppelin directory; I thought it was a permissions problem, so I created it manually and assigned 777 permissions.

spark-warehouse-folder

spark-warehouse folder permission settings

It still failed. In the link above a forum user suggested using triple slashes to define the proper path: file:///C:/zeppelin-0.6.2-bin-all/spark-warehouse

But I still didn’t know where to place this configuration. I couldn’t do it in the spark shell, nor while creating a spark session (zeppelin does that for me), and conf/spark-defaults.conf didn’t seem to be a good idea for Zeppelin because I was using the built-in spark version.

Finally, I remembered that it is possible to add additional spark settings on the interpreter configuration page, so I just navigated there and created it:

warehouse-dir

spark.sql.warehouse.dir

Just as additional info, you can verify the settings saved on this page in the file Drive:\ZEPPELIN_DIR\conf\interpreter.json

spark-warehouse folder 3.JPG

interpreter.json

After these steps, I was able to run all of the notes from the Zeppelin tutorials.

running-notes-zeppelin-tutorial

Running the load data into table note

Note that the layout of the tutorial more or less tells you the order in which you have to execute the notes. The note “Load data into table” must be executed before you play the notes below. I guess that is the reason it spans the whole width of the page: it must be executed before you visualize or analyze the data, while the notes below can be executed in parallel, or in any order. This layout is not a must, but it helps to keep an execution order.

note reults.JPG

Visualizing data with Zeppelin

I hope this helps you on your way to learn Zeppelin!
