Under the hood of a Connected Factory

TL;DR – Ingesting telemetry data is nothing new in the industrial IoT world. Typically, captured data are stored in a historian, although not all “historised tags” actually make it into the historian; what you have then is just some data infrastructure on-premises. To do advanced analytics, leading to machine learning and on to AI, the first step is to ingest telemetry from a larger variety of data sources into the cloud, which opens up interesting stream processing and analytics possibilities. This post talks about the guts of a connected factory, and how to bridge with the existing components and systems already running in it.

I posted this on LinkedIn 3 months ago. The setup was for the Azure IoT Suite Connected Factory pre-configured solution. I have been meaning to publish this, and now’s the time. The integral parts of a connected factory are connected telemetry stations, and OPC UA is the standard for industrial IoT machines and systems on your plant floor.

This post is about streaming the telemetry data from SCADA systems or MES to the cloud, with Azure IoT Hub being the cloud gateway for ingestion and for maintaining the digital twins of these physical systems. The component which allows this integration is the OPC UA Publisher for Azure IoT Edge.

This reference implementation demonstrates how Azure IoT Edge can be used to connect to existing OPC UA servers and publish JSON-encoded telemetry data from these servers, in OPC UA “Pub/Sub” format (using a JSON payload), to Azure IoT Hub. All transport protocols supported by Azure IoT Edge can be used, i.e. HTTPS, AMQP and MQTT (the default).

The target deployment environment is a Process Control Network (PCN) in which the target OPC UA server lives. My target environment is made up of Windows Server 2016 virtual machines in an on-premises data centre. Azure IoT Edge modules are packaged into Docker containers, and the current requirement for Docker images on Windows is that they have to be Windows images. Prior to this, the Dockerfile recipe for building the Docker image and running the container was Linux-only, which is great for many purposes, except for my target PCN environment. I made a pull request in the GitHub repo for OPC UA Publisher contributing a Dockerfile.Windows, which uses a Windows NanoServer image with the version of .NET Core this project depends on. The pull request was accepted and merged by the engineering team behind the project, and they improved the recipe based on the new Azure IoT Edge architecture. All within the spirit of open source and making contributions back into the community.
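To give a sense of the recipe, a minimal Dockerfile.Windows sketch looks something like the following. The base image tag, project name and output path here are assumptions for illustration only; the file in the GitHub repo is the authoritative version and has since evolved with the new Azure IoT Edge architecture.

# Sketch only: base image tag, project name and output path are illustrative assumptions
FROM microsoft/dotnet:2.0-sdk-nanoserver
WORKDIR /app
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o out
ENTRYPOINT ["dotnet", "out/OpcPublisher.dll"]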

To test this out, I

  • Created a Windows Server 2016 VM in Azure with Docker extension enabled
  • Git cloned the repo
  • docker build -t gw -f Dockerfile.Windows .
  • docker run -it --rm gw <applicationName> <IoTHubOwnerConnectionString>

Note: It is not a requirement to run the OPC UA Publisher for Azure IoT Edge within a Docker container. However, doing so makes it easier to deploy your IoT Edge modules on your field gateway, and it also allows you to orchestrate your containers thereafter. I do know of certain industrial IoT vendors who are deploying the bits directly onto their specialised hardware without the need for a Docker container.

Note: If you are using Zscaler and you encounter issues with dotnet restore while building your Docker image, this is likely due to a certificate trust issue between the Docker container and Zscaler. Just alter the Dockerfile to add the Zscaler certificate to the Docker container’s Trusted Root certificates and that will fix the error.
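For example, assuming the Zscaler root certificate has been exported as zscaler-root.cer next to the Dockerfile (a hypothetical file name), and assuming the base image has PowerShell available (NanoServer may need a different tool such as certoc.exe), the addition before the dotnet restore step could look like this:

# Assumption: zscaler-root.cer is the exported Zscaler root certificate
COPY zscaler-root.cer C:/certs/zscaler-root.cer
RUN powershell -Command "Import-Certificate -FilePath C:/certs/zscaler-root.cer -CertStoreLocation Cert:\LocalMachine\Root"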

Once you have the Docker container running with the OPC UA Publisher for Azure IoT Edge, what next? The next logical step is to publish your OPC UA server nodes. You can add more nodes to publishednodes.json after the OPC UA Publisher module is running. To do so, you need to use an OPC UA client to connect to the Azure IoT Edge OPC UA Publisher module on its exposed endpoint on port 62222, and publish a node.

You can expose this port by adding the following arguments when you run the Docker container on the CLI:

docker run -it --rm --expose 62222 -p 62222:62222 gw <applicationName> <IoTHubOwnerConnectionString>

If you have a simple OPC UA client you can use it to connect to this endpoint. I used the sample .NET Core Client, and I could see that the exposed OPC UA TCP endpoint allows you to add more nodes, create a subscription, and so on. However, it does not allow me to invoke methods on the nodes; I reckon I need a full .NET application client to do so.

C:\UA-.NETStandardLibrary\SampleApplications\Samples\NetCoreConsoleClient>dotnet run opc.tcp://winozfactory:62222/UA/Publisher

A better way is to use the UA Sample Client and UA Sample Server from the OPC Foundation .NET Standard Library GitHub repo.

Using the UA Sample Client, I am able to connect to the Azure IoT Edge OPC UA Publisher endpoint of opc.tcp://winozfactory:62222/UA/Publisher.

Then go to Objects->PublisherInstance and call the PublishNode item. You need to provide the NodeID and the ServerEndpointURL as arguments. In my case, I want to subscribe to the simulated value in my UA Sample Server, so I provided a node ID of ns=5;i=40 and a server endpoint URL of opc.tcp://winozfactory:51210/UA/SampleServer.

Voila! Publishednodes.json was updated by the OPC UA Publisher without a restart.
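The exact schema of publishednodes.json differs between OPC UA Publisher versions, but the new entry looks roughly like this (using the node ID and server endpoint URL from above):

[
  {
    "EndpointUrl": "opc.tcp://winozfactory:51210/UA/SampleServer",
    "NodeId": { "Identifier": "ns=5;i=40" }
  }
]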

To prove that telemetry is being ingested into Azure IoT Hub, use Device Explorer. Monitor the device associated with the connection string you used when you started the OPC UA Publisher, and you will see the telemetry serialised into JSON.
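Each message is a small JSON document. The exact fields depend on the OPC UA Publisher version and its telemetry configuration, but a message for the node published above is roughly of this shape (the values are illustrative only):

{
  "NodeId": "ns=5;i=40",
  "ApplicationUri": "urn:winozfactory:UA:SampleServer",
  "Value": {
    "Value": 42.0,
    "SourceTimestamp": "2017-11-01T10:15:30Z"
  }
}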

Once the telemetry stream lands in Azure IoT Hub, the sky’s the limit, literally, as your data-in-motion and data-at-rest are both in the cloud. The next step is to hook this up to Azure Time Series Insights, besides the many other options you have as part of implementing a Lambda architecture using Azure first-party or third-party services. We continue to add more features to Time Series Insights, such as root cause analysis and time exploration updates. Read the article here, but here’s an excerpt:

“We’ve heard a lot of feedback from our manufacturing, and oil and gas customers that they are using Time Series Insights to help them conduct root cause analysis and investigations, but it’s been difficult for them to quickly pinpoint statistically significant patterns in their data. To make this process more efficient, we’ve added a feature that proactively surfaces the most statistically significant patterns in a selected data region. This relieves users from having to look at thousands of events to understand what patterns most warrant their time and energy. Further, we have made it easy to then jump directly into these statistically significant patterns to continue conducting an analysis.”
