Building Docker image with Azure IoT Edge cross-compiled libraries for Raspberry Pi

TL;DR – In my last article, I wrote about the steps it takes to build a Docker image with cross-compiled native libraries for arm-hf/arm32/Raspbian/RaspberryPi and .NET Core 2.0.0 DLLs compiled for linux-arm. However, it involves too many manual steps once you are running your own container. A better practice is to build from a Dockerfile that automates those steps, which you can download.

How to get started?

  1. Run this on your dev machine, not your target Raspberry Pi, because the Docker image is based on Debian 8 (jessie) x86-64 GNU/Linux, which is the environment needed to run the RPiToolchain as well as the .NET Core 2.0.0 SDK.
  2. Git clone my repo from here.
  3. Build the Docker image by the following command:
docker build -t iot-edge-rpi .

As a result of the cross-compilation and the .NET Core 2.0.0 compilation, you get a tarball which you can copy onto your target Raspberry Pi. The cleanest way to do so is to mount a host directory onto your container before running it, then copy iot-edge-rpi.tar.gz to /mnt.

docker run -v /home/username:/mnt --name iot-edge-rpi -it iot-edge-rpi
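Inside the running container, copy the tarball out to the mounted host directory. A minimal sketch, assuming the tarball was produced at the container's filesystem root (its actual location depends on the Dockerfile):

cp /iot-edge-rpi.tar.gz /mnt/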

Copy iot-edge-rpi.tar.gz to your target Raspberry Pi. The easiest way is to use scp.
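As a sketch of that copy, assuming the tarball sits in your home directory and that your Pi's hostname and user are raspberrypi.local and pi (substitute your own):

scp ~/iot-edge-rpi.tar.gz pi@raspberrypi.local:~/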

Installing Ubuntu Core and Ubuntu 16.04 on Intel NUC

I have an Intel® NUC Kit DE3815TYKHE and I finally got some time to reinstall the OS on it. I'd also installed an SSD drive which I had lying on the top shelf of my study room. My intention is to install Ubuntu Core on the eMMC and Ubuntu 16.04 on the SSD: Ubuntu Core to try out snap packages, including the snap package for Azure IoT Edge; Ubuntu 16.04 so that I have a local dev environment instead of having to constantly spin up my Linux VM in Azure just to try things out.

  1. Upgrade BIOS. At the time of writing this post, the latest BIOS update for this NUC is version 0060.
  2. Follow the steps of flashing Ubuntu Core on the Intel NUC at the Ubuntu developer page.
  3. Instead of a standard Ubuntu image, I used the Linux Lite 64bit image. Follow the instructions here. This is based on an Ubuntu 16.04 xenial image.
  4. Installing the grub bootloader failed for me. Instead I manually installed grub on my Linux Lite drive.
  5. Open up a shell window. Credit goes to the person who posted on this forum thread; it works like a charm. [Note: anywhere you see “XY” or “X”, change that to the correct drive letter (“X”) and partition number (“Y”) for your Linux Lite root partition. To list your partitions, just run the lsblk command.]
    sudo mount /dev/sdXY /mnt                # mount the Linux Lite root partition
    for i in /dev /dev/pts /proc /sys; do sudo mount -B $i /mnt$i; done   # bind-mount system dirs
    sudo chroot /mnt                         # change root into the installed system
    grub-install /dev/sdX                    # install grub to the drive's MBR
    update-grub                              # regenerate the grub config
    exit                                     # leave the chroot
    for i in /sys /proc /dev/pts /dev; do sudo umount /mnt$i; done        # undo the bind mounts
    sudo umount /mnt
  6. I turned off UEFI boot, and just stuck to Legacy boot in the BIOS. Works for me.

Running IoT Edge on Raspbian/arm32/arm-hf

I reckon I’ll always try to start my technical articles with a tl;dr that summarises a lengthy post.

TL;DR – Azure IoT Edge is a project which enables edge processing and analytics in IoT solutions. The modules within the IoT Edge gateway can be written in different programming languages (native C, as well as module language bindings for Node.js, .NET, .NET Core and Java) and can run on platforms such as Windows and various Linux distros. As part of what I do in my day job, I work with customers and partners as they build their edge modules. One of the key asks is to be able to write modules in .NET Core and deploy them on a Raspberry Pi, due to its ease of use and popularity for PoC and prototyping purposes. This article explains how to run modules written for .NET Core within the same Azure IoT Edge framework.

This post is timely as the .NET Core engineering team announced less than a week ago that .NET Core Runtime ARM32 builds are now available. More details about this announcement are available here. Please do take note that these builds are community supported, not yet supported by Microsoft, and have preview status. For prototyping purposes, I wouldn’t complain too much about this. My plan was to get the existing .NET Core modules cross-compiled on my dev machine for linux-arm against the .NET Core 2.0.0 runtime. This cannot be performed on Raspbian because only the .NET Core runtime is available there, not the SDK, and there are native runtime shared libraries which must be cross-compiled for Raspbian. I will demonstrate how you can pull a Docker container with the right cmake toolchain to cross-compile for Raspbian.

According to the documentation in the .NET Core Sample for Azure IoT Edge, the current version of the .NET Core binding and sample modules were written and tested in .NET Core v1.1.1. However, I have also verified that this works with .NET Core 2.0.0.

Cross-compiling Azure IoT Edge native runtime for ARM32

If you do a search on NuGet, you will find a number of NuGet packages for Azure IoT Edge which contain native runtime libraries for several platforms, namely Windows x64, Ubuntu 16.04 LTS x64, Debian 8 x64 and .NET Standard. How about Raspbian, which is a flavour of Debian 8 on ARM32? Instead of waiting for this to be released, you can cross-compile the runtime libraries yourself. After all, this is one of the benefits of the open-source nature of the Azure IoT Edge project.

Within the Azure IoT Edge repo’s jenkins folder, there is a build script just for cross-compiling for Raspberry Pi. This script is called raspberrypi_c.sh and it produces a cmake toolchain file called toolchain-rpi.cmake. To make things simpler, the Azure IoT engineering team created a Docker image. There is no guarantee that this image will be kept on Docker Hub at all times, but for now you can find it here. There are heaps of other Docker images available as well.

On your dev machine which has the Docker daemon installed (not your Raspberry Pi), pull down the image:

docker pull aziotbld/raspberrypi-c

Then run the Docker container:

docker run -it aziotbld/raspberrypi-c

Inside the Docker container do the following steps:

  1. Update and install all the prerequisite libraries.
apt-get update 

apt-get install -y libunwind8 libunwind8-dev gettext libicu-dev liblttng-ust-dev libcurl4-openssl-dev libssl-dev uuid-dev apt-transport-https

2. Install the .NET Core SDK in this Docker container. The image is based on Ubuntu 14.04.

curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg

mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg 

sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-trusty-prod trusty main" > /etc/apt/sources.list.d/dotnetdev.list'

apt-get update

apt-get install dotnet-sdk-2.0.0

3. Git clone the Azure IoT Edge repository.

git clone https://github.com/Azure/iot-edge.git

4. Target .NET Core 2.0.0 for the Azure IoT Edge .NET Core binding and modules. To do so, modify all of the .NET Core modules’ csproj files as follows:

i. Change <TargetFramework> from “netstandard1.3” to netcoreapp2.0 so that it looks like the following:

    <TargetFramework>netcoreapp2.0</TargetFramework>

Note: .NET Standard is a specification for the .NET APIs which form the base class libraries (BCL). The original setting of .NET Standard 1.3 means the module targets an API surface that .NET Core 1.0 implements; a platform implementing .NET Standard 1.3 exposes all APIs defined in .NET Standard versions 1.0 through 1.3. More information about this here.

ii. Comment out the <NetStandardImplicitPackageVersion> line, like the one listed here. A sketch of the result follows.
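For reference, a minimal sketch of what the relevant property group might look like after both changes; the version number in the commented-out line is illustrative, as the actual value varies per module:

    <PropertyGroup>
      <TargetFramework>netcoreapp2.0</TargetFramework>
      <!-- <NetStandardImplicitPackageVersion>1.6.0</NetStandardImplicitPackageVersion> -->
    </PropertyGroup>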

You can now publish specifically for the linux-arm runtime. In the shell script tools/build_dotnet_core.sh, add a -r flag to the dotnet commands. For example, after adding the -r flag, the dotnet restore and dotnet build lines look like the following:

dotnet restore -r linux-arm $project
dotnet build -r linux-arm $build_config $project
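If you also want self-contained output for linux-arm, a dotnet publish along the same lines should work; this is a sketch rather than part of the original script:

dotnet publish -r linux-arm $project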

5. For some reason the test NuGet packages were missing. I trust everything is tested fine, so I commented out the last few lines of my build_dotnet_core.sh script.

#for project in $projects_to_test
#do
#    dotnet test $project
#    [ $? -eq 0 ] || exit $?
#done

6. Run

cd iot-edge
chmod 755 ./tools/build_dotnet_core.sh
./tools/build_dotnet_core.sh

7. Now it is time to cross-compile the native runtime library for Raspbian 8 ARM32. Create a symbolic link to the RPiTools in /home/jenkins, because the raspberrypi_c.sh script expects the RPiTools in the home directory, which for the root user is /root:

ln -sd /home/jenkins/RPiTools/ /root/RPiTools

8. Run

chmod 755 ./jenkins/raspberrypi_c.sh
./jenkins/raspberrypi_c.sh

Note: If you encounter cmake errors, just delete the install-deps directory and re-run the shell script above.

This creates a toolchain file at ./toolchain-rpi.cmake.

9. Now it is time to build Azure IoT Edge with the .NET Core binding targeting 2.0.0, using the cmake toolchain file.

./tools/build.sh --disable-native-remote-modules --enable-dotnet-core-binding --toolchain-file ./toolchain-rpi.cmake

10. All the native runtime libraries required to run your gateway on Raspbian and targeting .NET Core runtime 2.0.0 are in the following directories:

iot-edge/install-deps/lib/*.so
iot-edge/build/bindings/dotnetcore/libdotnetcore.so
iot-edge/build/samples/dotnet_core_module_sample/ - *.so and *.dll
iot-edge/build/modules/ - *.so and *.dll

11. Now it is worthwhile to commit the changes you have made in the Docker container to a new image which is properly tagged/labelled. You should also copy the entire iot-edge folder out of this Docker container as a tarball, and move it to your Raspberry Pi device. The steps required to do this are outside the scope of this tutorial, but a sketch follows.
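A rough sketch of those steps from the host; the container name, image tag and in-container path are all illustrative, so substitute your own:

docker commit iot-edge-build iot-edge-rpi-build:2.0.0   # snapshot the container as a tagged image
docker cp iot-edge-build:/iot-edge ./iot-edge           # copy the folder out of the container
tar -czf iot-edge-rpi.tar.gz ./iot-edge                 # pack it up for transfer
scp iot-edge-rpi.tar.gz pi@raspberrypi.local:~/         # move it to the Pi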

12. You can check out the Azure IoT Edge samples from its GitHub repo.

13. I tried out the simulated_ble sample within the dotnetcore folder. Please make sure that you have the .NET Core 2.0.0 runtime installed on your Raspberry Pi device. I added this within the loaders section in my gw-config.json file. Actually I’m not too sure if this is how it works in Linux, but I was being paranoid anyhow. The right way, I think, is to export the directory in the LD_LIBRARY_PATH environment variable; see the sketch after the snippet. 🙂

    "loaders": [
        {
            "type": "dotnetcore",
            "name": "dotnetcore",
            "configuration": {
                "binding.path": "/home/pi/iot-edge/build/bindings/dotnetcore/libdotnetcore.so",
                "binding.coreclrpath": "/opt/dotnet-2.0.0/shared/Microsoft.NETCore.App/2.0.0/libcoreclr.so",
                "binding.trustedplatformassemblieslocation": "/opt/dotnet-2.0.0/shared/Microsoft.NETCore.App/2.0.0/"
            }
        }
    ],
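For the LD_LIBRARY_PATH alternative mentioned above, a sketch, assuming the native libraries landed in the directories from step 10:

export LD_LIBRARY_PATH=/home/pi/iot-edge/install-deps/lib:/home/pi/iot-edge/build/bindings/dotnetcore:$LD_LIBRARY_PATH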

14. To be even more paranoid, I copied all the native runtime libraries (*.so) and the DLLs for the .NET Core binding modules into my execution folder. I also copied a native gateway host, renamed it gw, and placed it within the same folder.

15. The result…..

pi@raspberryfai:~/iot-edge-samples/dotnetcore/simulated_ble/src/bin/Debug/netcoreapp2.0$ ./gw gw-config.json
Gateway is running.
Device: 01:02:03:03:02:01, Temperature: 10.00
Device: 01:02:03:03:02:01, Temperature: 11.00
Device: 01:02:03:03:02:01, Temperature: 12.00
Device: 01:02:03:03:02:01, Temperature: 13.00
Device: 01:02:03:03:02:01, Temperature: 14.00

VOILA!

Under the hood of a Connected Factory

TL;DR – Ingesting telemetry data is nothing new in the industrial IoT world. Typically, captured data is stored in a historian, although not all “historised tags” make it into the historian. What you end up with, then, is some on-premises data infrastructure. In order to do advanced analytics, leading to machine learning and eventually AI, the first step is to ingest telemetry from a larger variety of data sources into the cloud, which opens up interesting stream processing and analytics there. This post talks about the guts of a connected factory, and how to bridge with its existing components and systems.

I posted this on LinkedIn 3 months ago. The setup was for the Azure IoT Suite Connected Factory pre-configured solution. I have been meaning to publish this, and now’s the time. The integral parts of a connected factory are connected telemetry stations, for which OPC UA is the standard for industrial IoT machines and systems on your plant floor.

This post is about streaming the telemetry data from SCADA systems or MES to the cloud, with Azure IoT Hub being the cloud gateway for ingestion and for maintaining the digital twins of these physical systems. The component which allows this integration is the OPC UA Publisher for Azure IoT Edge.

This reference implementation demonstrates how Azure IoT Edge can be used to connect to existing OPC UA servers and publish JSON-encoded telemetry data from these servers in OPC UA “Pub/Sub” format (using a JSON payload) to Azure IoT Hub. All transport protocols supported by Azure IoT Edge can be used, i.e. HTTPS, AMQP and MQTT (the default).

The target deployment environment is a Process Control Network (PCN) in which the target OPC UA server lives. My target environment is made up of Windows Server 2016 virtual machines in an on-premises data centre. Azure IoT Edge modules are packaged into a Docker container, and the current requirement for Docker images on Windows is that they have to be Windows images. Prior to this, the Dockerfile recipe for building a Docker image and running the container was for Linux, which is great for many purposes, except for my target PCN environment. I made a pull request in the GitHub repo for the OPC UA Publisher with my contribution of a Dockerfile.Windows, which uses a Windows NanoServer image with the right version of .NET Core upon which this project depends. The pull request was accepted and merged by the engineering team behind this project, and they improved the recipe based on the new Azure IoT Edge architecture. All within the spirit of open source and making contributions back to the community.

To test this out, I

  • Created a Windows Server 2016 VM in Azure with Docker extension enabled
  • Git cloned the repo
  • docker build -t gw -f Dockerfile.Windows .
  • docker run -it --rm gw <applicationName> <IoTHubOwnerConnectionString>

Note: It is not a requirement to run the OPC UA Publisher for Azure IoT Edge within a Docker container. However, doing so makes it easier to deploy your IoT Edge modules on your field gateway, and it allows you to orchestrate your containers thereafter. I do know of certain industrial IoT vendors who deploy the bits directly onto their specialised hardware without the need for a Docker container.

Note: If you are using Zscaler and you encounter issues with dotnet restore while building your Docker image, this is likely due to a certificate trust issue between the Docker container and Zscaler. Just alter the Dockerfile to add the Zscaler certificate to the Docker container’s trusted root certificates, and that will fix the error.
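A sketch of that Dockerfile alteration for a Linux-based build image, assuming the Zscaler root CA has been exported as zscaler-root.crt next to the Dockerfile (on a Windows image the equivalent tool is certoc.exe):

COPY zscaler-root.crt /usr/local/share/ca-certificates/zscaler-root.crt
RUN update-ca-certificates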

Once you have the Docker container running with the OPC UA Publisher for Azure IoT Edge, what next? The next logical step is to publish your OPC UA server nodes. You can add more nodes to publishednodes.json after the OPC UA Publisher module is running. To do so, you need an OPC UA client to connect to the Azure IoT Edge OPC UA Publisher module on its exposed endpoint on port 62222, and publish a node.

You could expose this port when you run the Docker container by adding the following arguments when you run Docker on the CLI:

docker run -it --rm --expose 62222 -p 62222:62222 gw <applicationName> <IoTHubOwnerConnectionString>

If you have a simple OPC UA client, you can use that to connect to this endpoint. I used the sample .NET Core client, and I could see that the exposed OPC UA TCP endpoint allows you to add more nodes, create a subscription, and so on. However, it does not allow me to invoke the nodes; I reckon I need a full .NET application client to do so.

C:\UA-.NETStandardLibrary\SampleApplications\Samples\NetCoreConsoleClient>dotnet run opc.tcp://winozfactory:62222/UA/Publisher

A better way is to use the UA Sample Client and UA Sample Server from the OPC Foundation .NET Standard Library GitHub repo.

Using the UA Sample Client, I am able to connect to the Azure IoT Edge OPC UA Publisher endpoint of opc.tcp://winozfactory:62222/UA/Publisher.

Then go to Objects->PublisherInstance, and call the PublishNode item. You need to provide the NodeID and the ServerEndpointURL as arguments. In my case, I wanted to subscribe to the simulated value in my UA Sample Server, so I provided a node ID of ns=5;i=40, and a server endpoint URL of opc.tcp://winozfactory:51210/UA/SampleServer.

Voila! publishednodes.json was updated by the OPC UA Publisher without a restart.

To prove that telemetry is being ingested into Azure IoT Hub, use Device Explorer. Monitor the device which you used in the connection string when you started the OPC UA Publisher, and you will see the telemetry serialised into JSON.

Once the telemetry stream lands in Azure IoT Hub, the sky’s the limit, as both your data-in-motion and data-at-rest are in the cloud. The next step is to hook this up to Azure Time Series Insights, among the many other options you have as part of implementing a Lambda architecture using Azure first-party or third-party services. We continue to add more features to Time Series Insights, such as root cause analysis and time exploration updates. Read the article here, but here’s an excerpt:

“We’ve heard a lot of feedback from our manufacturing, and oil and gas customers that they are using Time Series Insights to help them conduct root cause analysis and investigations, but it’s been difficult for them to quickly pinpoint statistically significant patterns in their data. To make this process more efficient, we’ve added a feature that proactively surfaces the most statistically significant patterns in a selected data region. This relieves users from having to look at thousands of events to understand what patterns most warrant their time and energy. Further, we have made it easy to then jump directly into these statistically significant patterns to continue conducting an analysis.”

Bridging network adapters to share Internet connection with your RPi2/Windows 10 IOT Core

In my previous post, I shared a workaround for sharing an Internet connection via ICS when the option is disabled due to domain group policy. I have since learned that there is an easier option for sharing your Wi-Fi adapter’s Internet connection with devices connected to your Ethernet adapter, like a Raspberry Pi running Windows 10 IoT Core. Here are the steps:

  1. Open Network and Sharing Center.
  2. Change adapter settings.
  3. Select both your Wi-Fi and Ethernet adapter.
  4. Right-click and select the option to bridge the connections.
  5. Make sure that the Internet Protocol Version 4 (TCP/IPv4) properties are set to “Obtain an IP address automatically”.
  6. In order to find your Windows 10 IoT Core device’s IP address, run Windows10IoTCoreWatcher.
  7. Right-click on the board item, and select “Copy IP address”.
  8. Follow the PowerShell documentation here to use PowerShell to connect to your running device. You can also follow the instructions here to use SSH to connect to your device.
If this method fails, please fall back to the ICS setup workaround.

Windows 10 IoT Core / Raspbian on Raspberry Pi 2 using Windows 10’s Internet Connection Sharing (ICS)

You just got yourself a Raspberry Pi 2 (RPi 2). You could be running Raspbian or Windows 10 IoT Core. You don’t have access to a hub/switch/router to connect the RPi 2 to for an Internet connection. The next best solution is connecting the RPi 2 to your PC via Ethernet and sharing your Wi-Fi’s Internet connection via Internet Connection Sharing (ICS). When you go to the Wi-Fi adapter properties, you get some bad news:

WiFi-ICS-disabled

What do you do? Here’s a workaround which is definitely NOT endorsed by your friendly network administrator, but it works. NOTE: This workaround is NOT permanent and it is not meant to flout your network administrator’s group policy; those rules are there for good reasons, such as security.

  1. To enable sharing on the Wi-Fi adapter, run the following command in a Command Prompt run as Administrator.
netsh wlan set hostednetwork mode=allow

 

  2. Run regedit. Go to Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Network Connections. Edit NC_ShowSharedAccessUI, and enter 1 in the value data.
  3. Go back to the Wi-Fi adapter properties, and now you will see the Sharing tab. If you don’t see the Sharing tab, it could be because you have not connected your Ethernet adapter (for those that come as a USB dongle); you need at least two network adapters present in order to do ICS. Check the box that says “Allow other network users to connect through this computer’s Internet connection”.

WiFi-ICS-avail

  4. Go to your Ethernet adapter properties. Check out the Internet Protocol Version 4 (TCP/IPv4) Properties. You will see the following preconfigured for you. Do not change these settings.

ethernet-ipv4settings

 

  5. Connect the network cable between your Raspberry Pi 2 and your Windows 10 machine via the Ethernet port.
  6. When you start up Windows 10 IoT Core on your Raspberry Pi 2, you will see that it is dynamically assigned an IP address like 192.168.137.2. Voila, this means that the Internet connection is shared with your RPi 2.

RPi2-win10-dashboard

  7. Follow the PowerShell documentation here to use PowerShell to connect to your running device. You can also follow the instructions here to use SSH to connect to your device; a sketch follows.

From <http://ms-iot.github.io/content/en-US/win10/SetupRPI.htm>
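For instance, connecting over SSH, a sketch assuming the default Administrator account and the IP address from step 6:

ssh Administrator@192.168.137.2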

  8. To make sure ICS is enabled properly, just ping any Internet site.

pinganysite

To start sending events from Windows 10 IoT Core to Azure IoT Hub:

In your Visual Studio 2015 UWP project, go to your project properties, and configure the remote machine IP.
vs2015-rpi2-props

Configure the remote machine IP as 192.168.137.2, or whatever IP address you got from step 6. Run your project.

Check Device Explorer to verify that the event has been received at the IoT Hub.

deviceexplorer

Finally, a word of caution. If you don’t see ICS sharing available in your Wi-Fi adapter settings anymore, this is because the group policy has been re-applied to your machine. That’s OK; it’s meant to protect your machine after all. When you need to enable ICS for another instance, just redo the steps above.

Azure API App – FTP Connector – How to solve “227 Entering passive mode error”

I really like Azure Logic Apps. It reminds me of the good old days of workflows in WF, except that this is meant for simple workflow logic, and it does the trick. I particularly like the FTP connector and the Azure BLOB connector. Because the trigger function is not yet implemented in the Azure BLOB connector, I found a workaround, which was to use the FTP connector as the trigger, and the Blob connector as an action. But in this particular IoT scenario, it is hardly a workaround; it’s a necessity, because the “thing” can only upload my payload to an FTP server or send an email with the payload as an attachment. More about this IoT scenario I am working on in a later post.

For the past few days, I’d been stuck looking at this one error. When I clicked on the “Output Links” in my trigger history, this highly elusive message was shown:

“message”: “Unable to connect to the Server. The remote server returned an error: 227 Entering Passive Mode (104,43,19,174,193,60).\r\n.”

This is super weird because “227 Entering Passive Mode” is hardly an error; it’s a valid FTP status message for passive mode. So why is this an error? Before jumping to the conclusion that this is a bug in the FTP connector, I tried 3 different options for hosting an FTP server in Azure:
1. Azure Websites
2. A Linux VM running vsftpd
3. A Windows Server 2012 R2 Datacenter VM running FTP Server

I tried all of the above in that order. Only (1) worked, but my “thing” could not upload the event data to an Azure Websites FTP server. I wasn’t even going to try to fix what’s in my “thing” because this “thing” is pretty much a blackbox, if you will. It’s white actually, but you get what I mean. So I tried options (2) and (3).

After trying out all kinds of configurations and creating incoming rules and Azure endpoints to allow traffic, it finally dawned on me. If I want to make a VM instance available across a range of ports, I have to specify the ports by adding them as endpoints in the Azure management portal. The easier solution, however, is to enable an instance-level public IP address for that VM instance. With this configuration I can communicate directly with my VM instance using the public IP address. The advantage of doing so is that I can immediately enable a passive FTP server, because “passive” essentially means that the server can choose data ports dynamically.

ftpdataports
These port ranges can be huge, so I don’t think you would want to specify/add 1001 port numbers as endpoints on your VM instance. While it costs to have an instance-level public IP address, it’s well worth the money. Trust me.

Just remember to create an incoming rule for your firewall to allow your data channel port range.
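For option (2), a minimal sketch of the relevant vsftpd.conf settings and firewall rule, assuming a passive data-port range of 10100-10199 and your instance-level public IP (both values are illustrative):

# /etc/vsftpd.conf – restrict passive mode to a known data-port range
pasv_enable=YES
pasv_min_port=10100
pasv_max_port=10199
pasv_address=<your-instance-public-ip>

# allow the data-port range through the VM's firewall
sudo iptables -A INPUT -p tcp --dport 10100:10199 -j ACCEPT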

What a relief when I finally saw the following in the FTP connector trigger history.

ftpconnectortriggersok

Potential solution to HTTP 500 Error with your WordPress site on Azure Websites

It’s been a while since I blogged, and the embarrassing part is that my blog has been down and I hadn’t really got the time to troubleshoot. I did what anyone would do: searched online, which pointed me to a few posts on MSDN and Stack Overflow, but nothing really did it for me. So in case you found this post through the same searches, this could be the solution for you.

If you get an HTTP 500 error, you should FTP into your Azure website deployment to figure out the exact error. Here’s where you can find your detailed HTTP errors:

awsdetailederror

 

Just open the file with the latest date/timestamp.

If your error looks like the following:

pythonfastcgimoduleerror

That means you have enabled Python in your Azure website, which you do not need, because WordPress runs on PHP.

Go to your Azure website configuration and disable it like shown below:

python-off

 

This ought to fix it.

Using Azure Stream Analytics to tap into an Event Hub data stream

The prerequisite is to make sure that you have requested the Stream Analytics preview, if you have not already done so.

1. Create a Stream Analytics job. Jobs can only be created in two regions while the service is in preview.

sa-1

2. Add an input to your job. This is the best part, because we get to tap into an Event Hub to analyse the continuous sequence of event data and potentially transform it within the same job.

sa-2-addinput

3. Select Event Hub.

sa-3-addeh

4. Configure the event hub settings. You could also tap into an event hub from a different subscription.

sa-4-ehsettings

5. Since my event hub’s event data is serialized in JSON format, that is exactly what I select in the following step.

sa-5-serializationsetting

Under the Query tab, I just insert a simple query like the following.

SELECT * FROM getfityall 

I’m not doing any transformation yet; I just want to make sure that the event data sent by my Raspberry Pi via the Event Hub REST API is coming through properly.

Next on the list of steps is to setup the output in the job.

6. Add an output to your job. I’m using BLOB storage just to keep things simple, so that I can open the CSV file in Excel to take a look at the data stream.

sa-6-output

7. Set up the BLOB storage settings.

sa-7-blobsettings

8. Specify the serialization settings. I’m choosing CSV for the obvious reason stated above.

sa-8-serialization

As I pump telemetry data from my Raspberry Pi, I can see my CSV file being created/updated. Just go to the container view in your BLOB storage, and download the CSV file.

sa-9

Below is what my event data stream looks like. It shows event data points captured from two Raspberry Pis: one using the MPL3115 temperature sensor (part of the Xtrinsic sensor board), and another using the MCP9808 temperature sensor. The fun begins as I write some funky transformation logic in the query and do some real-time complex event processing; a sketch of such a query follows the screenshot.

streamanalytics-temperature
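For instance, a sketch of a windowed transformation; the deviceId and temperature field names are assumptions about my payload schema:

SELECT deviceId, AVG(temperature) AS avgTemp
FROM getfityall
GROUP BY deviceId, TumblingWindow(second, 30)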

Send accurate temperature data from Raspberry Pi to Azure Event Hub

This is a follow-up post to my previous one, this time about sending accurate temperature data from the MCP9808 temperature sensor board to an Azure Event Hub. This is done through the MCP9808 Python library provided by Adafruit, plus the event hub code I had repurposed from the Xtrinsic sensor board; this is the updated version. Inside the ~/Adafruit_Python_MCP9808/examples directory, I made a copy of the simpletest.py script as send2eventhub.py.

The first step is to import eventhubms and socket.

import socket
import eventhubms

The parts which I had modified are the following:

# Loop printing measurements every second.
print 'Press Ctrl-C to quit.'
hubClient = eventhubms.EventHubClient()      # client which posts to the event hub REST API
parser = eventhubms.EventDataParser()        # formats a reading into the event data payload
hostname = socket.gethostname()              # used as the sender identity

while True:
    temp = sensor.readTempC()                # read from the MCP9808 over I2C
    message = 'Temperature: {0:0.3F}'.format(temp)
    body = parser.getMessage(message, "MCP9808")
    print "\n" + body + ""
    hubStatus = hubClient.sendMessage(body, hostname)
    print "[RPi->EventHub]\t[Data]" + message + "\t[Status]" + str(hubStatus)
    time.sleep(1.0)

The event hub client is no different than the one I described in my previous post for sending data from the Xtrinsic sensor board. The event data is sent to the event hub via its REST API. It’s pretty simple.
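For context, a rough sketch of what such a REST call looks like, independent of the eventhubms module; the namespace, hub name and SAS token are placeholders:

import urllib2

# POST a message to the Event Hub REST send endpoint (placeholders throughout)
url = 'https://mynamespace.servicebus.windows.net/getfityall/messages'
req = urllib2.Request(url, '{"temperature": 21.5}')
req.add_header('Authorization', 'SharedAccessSignature sr=...&sig=...&se=...&skn=...')
req.add_header('Content-Type', 'application/atom+xml;type=entry;charset=utf-8')
print urllib2.urlopen(req).getcode()  # 201 indicates the event was accepted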

Then, to make sure that the event data is sent correctly and can be consumed from the event hub, I made use of Azure Stream Analytics, which is the simplest way to set the event hub as an input. Otherwise you would have to write code to either receive events directly from the event hub, or use the EventProcessorHost like I did in my scalable event hub processor in an Azure worker role.

Since this is a separate topic by itself, I will write another post about how to create a new Stream Analytics job that adds an event hub as the data stream from which event data is consumed and transformed.