Monitoring a Production Line With PRTG
Originally published on February 15, 2019 by Guest Author
Last updated on August 31, 2021 • 13 minute read
Think PRTG is only good for monitoring networks? Think again! This article will show you how a Biological Scientist with an IT background and an IT Network Administrator built an inexpensive, yet powerful production monitoring system with PRTG and simple sensors connected to a Raspberry Pi. Project code name: Assiduous Ants.
This is a guest post by PRTG users and is part of our PRTG User Spotlight series. If you have interesting use cases or want to share your PRTG knowledge, submit your idea here!
For a production line, relevant parameters to monitor are the output, overall run-time of production and the stop frequency of the line. While PRTG is traditionally seen as a network monitoring solution, it is far more capable than that. By combining various techniques and ideas, live monitoring of a manufacturing line was implemented that provides valuable data for engineering and business decisions and for improving the output.
Here is the solution by Dominik Wosiek (Coding & Sensor Setup) and Florian Rossmark (Backend & PRTG Integration). Visit Florian's website for more content from him: https://www.it-admins.com/
The production line in question manufactures disposable medical equipment. It is controlled by a central, proprietary open-loop control system that already connects to various sensors along the production line. While it would technically be possible to get output information from the control system itself, this is only achievable through a highly costly, proprietary software interface. As a consequence, we had no accessible data about how many products were manufactured, at what time, when gaps in manufacturing occurred, or any other details. Until then, production statistics were collected manually on paper forms or entered into a digital questionnaire after production. Neither the manufacturing line nor the control system could be altered or interfaced with directly, because doing so would have triggered very complex validation and verification requirements (ISO 13485/FDA-regulated environment).
The concept we came up with: install passive sensors that detect objects passing along the line, collect the data in a central hub for further processing, and provide a convenient graphical user interface for production metrics.
As a processing unit, the obvious choice was a Raspberry Pi because of the easy sensor installation and the wealth of available tutorials. We first tested and used ultrasonic distance sensors to directly monitor passing objects, but found that too many false readings occurred because line operators would interfere with the sensors. We found a better solution in the combination of magnetic and pneumatic sensor switches that monitor the movement of stations along the line, e.g. of robot arms that move or weld the objects.
On the software side, we decided to use our own scripts written in Python that process the information and upload it to PRTG Network Monitor. This allows us to collect the data and show easy-to-read real-time graphics and summaries that are constantly updated throughout the day. We used the map feature of PRTG to publish a URL to monitor the current production progress and see gaps in production.
The collected and pre-processed data is further written out in separate logs for each sensor. Twice a day the logs are read out by a Cronjob-triggered script that summarizes relevant production metrics in an email report.
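As a sketch, such a summarizer could look like the following Python snippet. The log location, the one-reading-per-line CSV format, and the function names are illustrative assumptions, not our actual implementation:

```python
import glob
from datetime import datetime

def summarize_log(path):
    """Summarize one sensor log. Each line is assumed to hold
    'ISO-timestamp,count' (an illustrative format)."""
    total, first, last = 0, None, None
    with open(path) as fh:
        for line in fh:
            ts, value = line.strip().split(",")
            stamp = datetime.fromisoformat(ts)
            first = first or stamp   # remember the earliest entry
            last = stamp             # keep updating to the latest entry
            total += int(value)
    return {"file": path, "total": total, "from": first, "to": last}

def build_report(pattern="/var/log/line/sensor_*.log"):
    """One summary line per sensor log; a cron-triggered job
    would put this text into the email report."""
    lines = []
    for path in sorted(glob.glob(pattern)):
        s = summarize_log(path)
        lines.append(f"{s['file']}: {s['total']} parts "
                     f"({s['from']} to {s['to']})")
    return "\n".join(lines)
```

The actual report also includes gap times and per-station detail, but the pattern is the same: read each sensor's log, aggregate, and format.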
How it Works
As we already mentioned, we need to monitor the stations along the line. The concept is this: if a station moves—let's say a robotic arm changes position—we assume it is working on something, and we increase the integer count (another item has moved down the production line). To monitor this, we use three variables:
- Signal — Has a value of either "LOW" or "HIGH", which depends on the station's configured idle value. For example: if a robotic arm is idle when the circuit is open, it might have a value of "LOW". When the circuit is closed, it has a value of "HIGH". Thus we can assume, in this specific example, that the robotic arm is active when the circuit is closed, and the signal value is "HIGH".
- Sensor — An integer count to count the number of items moving down the line.
- State — State of the monitored station. There are three possibilities: "inactive", "activated", and "was_activated".
With that in mind, here is how we implemented the solution:
We connected the sensors to the Raspberry Pi with shielded, twisted-pair cables. The Pi constantly polls the sensors in a 100 ms loop and increments "HIGH" and "LOW" counts (between 0 and a defined maximum value to prevent overflow). Event-based interrupts are not an option because of the high electromagnetic noise in the area. Each "HIGH" reading is subtracted from the "LOW" count and vice versa, so the two states are effectively competing against each other, which suppresses sensor noise. False readings are further prevented by applying hysteresis to the detection.
Here's what happens:
- A sensor starts in an idle state ("HIGH" or "LOW" depending on sensor installation)
- When a sensor switch is activated, the relevant counter (the "HIGH" value, using our robotic arm example from earlier) is incremented in an array until a defined threshold is reached. The station's status then switches from "inactive" to "activated".
- When the station returns to its idle state (in other words, the robotic arm's signal returns to "LOW"), the count starts falling. When the count drops below a second threshold calculated from the maximum of the array, the state switches to "was_activated" and then back to "inactive". In the "was_activated" state, parameters like object count, detection time, etc. are updated.
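The detection logic above can be sketched roughly like this in Python. The class name, the threshold values, and the overflow cap are illustrative; the real script reads GPIO pins in a 100 ms loop rather than taking the signal as an argument:

```python
class StationMonitor:
    """Debounced station detector: 'HIGH' and 'LOW' readings compete in
    a single counter, and hysteresis thresholds drive the state changes.
    Thresholds here are made-up example values."""

    def __init__(self, activate_at=5, release_ratio=0.5):
        self.count = 0            # competing HIGH/LOW counter
        self.peak = 0             # maximum seen while activated
        self.state = "inactive"
        self.items = 0            # parts detected so far
        self.activate_at = activate_at
        self.release_ratio = release_ratio

    def poll(self, signal):
        """Called every ~100 ms with the current reading ('HIGH'/'LOW')."""
        if signal == "HIGH":
            self.count = min(self.count + 1, 100)   # cap against overflow
        else:
            self.count = max(self.count - 1, 0)

        if self.state == "inactive" and self.count >= self.activate_at:
            self.state = "activated"
            self.peak = self.count
        elif self.state == "activated":
            self.peak = max(self.peak, self.count)
            # second threshold derived from the peak: hysteresis
            if self.count < self.peak * self.release_ratio:
                self.state = "was_activated"
                self.items += 1     # one more item moved down the line
                self.state = "inactive"
```

Because a single noisy reading only nudges the counter by one step, a brief spike never crosses the activation threshold, while a sustained signal does.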
Updating to PRTG
Every 10 seconds, the constantly running script spawns a separate thread that independently sends object counts, detection time, gap time, temperature, and humidity data to the respective HTTP Push sensors in PRTG. Moving the upload to a separate thread was important, because otherwise the detection routine would halt until the upload finished, which can take up to a few seconds. We decided to define one push sensor for each sensor switch, as each sensor is also polled by a separate script.
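A minimal sketch of such a threaded push is shown below. The host, port, and token are placeholders for an actual HTTP Push Data sensor configured in PRTG, and the real script sends several values per interval:

```python
import threading
import urllib.parse
import urllib.request

# Placeholder for the URL of a PRTG HTTP Push Data sensor.
PRTG_URL = "http://prtg.example.com:5050/mytoken"

def build_push_url(value, text=""):
    """Build the GET request that delivers one reading to the sensor."""
    params = urllib.parse.urlencode({"value": value, "text": text})
    return f"{PRTG_URL}?{params}"

def push_async(value, text=""):
    """Fire-and-forget upload thread, so the 100 ms detection loop is
    never blocked by a slow upload (which can take a few seconds)."""
    def _send():
        urllib.request.urlopen(build_push_url(value, text), timeout=5)
    t = threading.Thread(target=_send, daemon=True)
    t.start()
    return t
```

The daemon flag keeps a hung upload thread from blocking script shutdown; failed uploads simply miss one 10-second interval rather than stalling detection.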
We adjusted the interval time in the PRTG system configuration to 10 seconds, as well as some other values, to get a more detailed view on the Live Graph.
Our next step was to summarize the important data and display it on an overview map in PRTG. We used the Sensor Factory sensor to summarize data from those various sensors and their channels and show them in a single table for counts, and we combined some graphs to show progress.
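For reference, a Sensor Factory sensor combines channels of other sensors via channel definitions of the form `#<id>:<name>` followed by a formula. A definition along these lines (the sensor IDs here are placeholders, not our real ones) collects the counts of several stations into one table:

```
#1:Station 1 Count
Channel(2101,2)
#2:Station 2 Count
Channel(2102,2)
#3:Bead Rework
Channel(2103,2)
```

Each `Channel(sensorID, channelID)` term references one channel of an existing sensor, so the combined table updates automatically as the source push sensors receive data.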
We also wanted to show how production was doing at every given time in relation to the elapsed production time. We did this by writing a script that constantly sends the current time of day, in minutes, to a separate sensor. This allowed us to then calculate the percentage of time passed in the current production workday vs. the produced products as a percentage of the total goal for the day. Using those two values, it's possible to show if we are on target or not.
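The calculation itself is simple. Here is a Python sketch of the idea (our actual day-time script is written in PowerShell, and the shift window and daily goal below are made-up example values):

```python
from datetime import datetime

# Illustrative assumptions: a 06:00-18:00 production day and a
# daily goal of 10,000 parts.
SHIFT_START_HOUR = 6
SHIFT_END_HOUR = 18
DAILY_GOAL = 10000

def elapsed_shift_percent(now):
    """Percentage of the production day that has already elapsed."""
    start = now.replace(hour=SHIFT_START_HOUR, minute=0,
                        second=0, microsecond=0)
    end = now.replace(hour=SHIFT_END_HOUR, minute=0,
                      second=0, microsecond=0)
    done = (now - start).total_seconds() / (end - start).total_seconds()
    return max(0.0, min(100.0, done * 100))

def on_target(produced, now):
    """True if output is keeping pace with the elapsed shift time."""
    produced_percent = produced / DAILY_GOAL * 100
    return produced_percent >= elapsed_shift_percent(now)
```

Comparing the two percentages is what drives the gauge on the map: if the produced percentage falls behind the elapsed-time percentage, the line is lagging.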
This calculation is vital for us to be able to react in real time to lags in production output. The frequency of production, i.e. output per hour, also enables historical analysis to evaluate improvements.
What we got
This is pretty much what you can see in the image of the map:
The line runs from right to left, and you can see the sensors installed along it. Most sensors just determine how many products passed a certain point of the manufacturing line. There is one sensor, called Bead Rework, that counts rejected parts that need to be run again after a manual correction by line operators.
On the upper left of the picture you can see the production time vs. production count calculation. If the gauge drops into the red area, the production is lagging behind schedule. On the right side you see a simple graph that shows humidity and temperature (these are important for the produced product).
At the bottom left you see a summarized table based on a Sensor Factory sensor that shows how many parts have passed each point of the line.
On the bottom right is a performance graphic. This graphic is important to understand how the line is doing. The white line shows the number of products that have passed a certain sensor. The "green" line shows gaps in time between two counts, i.e. the cycle time. Ideally, this line would be a straight plateau at a very low level. If the line stops or no product is processed, the gap time between parts increases. High gap times mean there was an issue, perhaps due to a higher rejection of parts or because the production line was stopped for some reason.
We created some additional maps that hold more detailed graphs. Selected users can also log in to PRTG and look at historical data.
Once the initial production metrics were available, we became curious about the underlying reasons for production stops. To capture failures as well, we provided the operators with a simple touch-interface app that can be used to log failures and errors in real time. A live summary of daily errors is also included in the maps.
To help demonstrate how we achieved this, take a look at the code and script samples. We've made them available in a ZIP file for you to download. We've included the Raspberry Pi code, and the PowerShell day-time script.
What are your thoughts on this implementation? Do you use PRTG in ways other than for "traditional" network monitoring? Let us know your thoughts in the comments below!