Around 200 years after the start of mechanical production, we have arrived at Industry 4.0, in which physical assets are digitized and integrated with big data analytics thanks to the advent of the IIoT (Industrial Internet of Things), smart sensors, and technological advancements such as (private) 4G/5G networks.
Leveraging big data is vitally important for a business, and it is enabling solutions to long-standing business challenges for industrial manufacturing companies around the world.
Although most manufacturers produce huge amounts of relevant data, 80% of important business data still goes unused for money-saving applications such as predictive (equipment) maintenance, automated alerting and equipment monitoring, and anomaly detection in complex processes.
Industrial manufacturers also have the opportunity to use these vast data resources to monitor and control costs, optimize the consumption of resources and assets, and manage sustainability efforts amid changing regulations.
Implementing Industry 4.0 and the Unified Namespace
The advent of Industry 4.0 facilitates the concept of a Unified Namespace, which creates an open, integrated data platform bringing together all your data from PLCs, MES, SCADA, ERP, edge devices, HMIs, and more, so that the relevant data is available either in the cloud or in on-premises IT environments.
It is important to be able to access all consolidated data via one interface or a “single pane of glass”, creating a single source of truth.
Many manufacturers are also equipping their products with sensors, enabling product owners to use connective data technologies to monitor the state of those products remotely in real time. This creates immense new streams of data relevant for (predictive) maintenance 4.0, predictive quality, and other supply chain applications. It is therefore important to be able to add new sensors or data generators to the desired Unified Namespace.
By collecting data from industrial equipment, manufacturers use analytics to gain important insights, reduce costs, optimize operations and processes, and meet customer expectations. Aligning all data owners and getting them to agree to share data can be a challenge, but it is necessary for running analytics on combined data published by many sources.
Multiple departments benefit from applying a Unified Namespace that enables manufacturing, maintenance, and asset analytics.
Current and legacy (Industry 3.0) applications
Some enterprise applications unfortunately do not scale through open standard connections, which makes a Unified Namespace that allows massive scalability even more important for reducing dependency on such applications.
Manufacturers also need the namespace to achieve closed-loop automation. But before closed-loop automation or prescriptive analytics can begin, the first step of collecting data and storing it in a central location must be taken.
Our product family 1OPTIC has been designed for exactly this: collecting and processing data published by sensors, PLCs, applications, and other data sources from many vendors, and storing it in a generic, vendor-agnostic format. The storage is most efficiently done in a relational database, using ETL (Extract, Transform, Load) principles among others.
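As a minimal sketch of the ETL idea, a vendor-specific payload can be normalized into one generic table. The payload format, source name, and schema below are illustrative assumptions (1OPTIC's actual PostgreSQL-based schema is not shown here); SQLite stands in for a full relational database to keep the example self-contained:

```python
import sqlite3
import json

# Generic, vendor-agnostic schema: every reading becomes (source, tag, ts, value).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (source TEXT, tag TEXT, ts TEXT, value REAL)")

def extract(raw: str) -> dict:
    """Extract: parse a vendor-specific JSON payload."""
    return json.loads(raw)

def transform(payload: dict, source: str) -> list:
    """Transform: map vendor fields onto the generic schema."""
    ts = payload["timestamp"]
    return [(source, tag, ts, float(v)) for tag, v in payload["values"].items()]

def load(rows: list) -> None:
    """Load: insert normalized rows into the relational store."""
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?, ?)", rows)
    conn.commit()

# Hypothetical example: a PLC publishes a temperature and a pressure reading.
raw = '{"timestamp": "2024-05-01T10:00:00Z", "values": {"temp_c": 71.5, "bar": 4.2}}'
load(transform(extract(raw), source="plc-line-3"))

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 2
```

Because every source lands in the same generic shape, queries and dashboards do not need to know which vendor produced a reading.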
Consider the 1OPTIC platform to be your ideal Unified Namespace.
Use-cases & Solutions
1OPTIC is the data platform that allows you to do just that: be the first to unlock, bundle, verify, and process all data from all sources.
1OPTIC serves as the Unified Namespace within manufacturing companies. The modular architecture allows for integration within existing business tooling and processes, making it a tailored fit for your use case.
It is the perfect central data repository for the following use cases:
Reactive maintenance is the simplest of the industrial equipment maintenance solutions we offer.
It works by observing (possibly with automatic notifications; see the automated alerting and equipment monitoring solution) that a data stream exceeds a certain value.
For example, pump X of machine Y must be maintained every 1500 operating hours. Or: pump X of machine Y should be serviced if operating hours are >1500 and extra power consumption > 2%.
Historical data then shows that the filters must be replaced within 150 to 250 operating hours. This makes it easier to plan maintenance with engineers and to prevent equipment breakdown due to lack of maintenance.
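Such threshold rules can be sketched in a few lines. The rule names, data keys, and limits below simply mirror the pump example and are illustrative:

```python
def check_thresholds(reading: dict, rules: list) -> list:
    """Evaluate one reading against simple threshold rules.
    Returns the names of all rules that fire."""
    fired = []
    for name, key, limit in rules:
        if reading.get(key, 0) > limit:
            fired.append(name)
    return fired

# Illustrative rules for pump X of machine Y (values from the example above).
rules = [
    ("service due", "operating_hours", 1500),
    ("extra power draw", "extra_power_pct", 2.0),
]

print(check_thresholds({"operating_hours": 1620, "extra_power_pct": 2.4}, rules))
# ['service due', 'extra power draw']
```

In practice such rules would run continuously against the live data streams in the central repository.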
As soon as the data processing is in order and enough historical data has been stored in the central data repository (the Unified Namespace), predictive maintenance can begin.
An important reason to get started with this is the savings potential. With preventive maintenance, maintenance is carried out at fixed moments and limits; it takes place as a precaution, but perhaps could or should have taken place later or earlier. By predicting when each asset actually needs maintenance, its lifespan can be extended (longevity) and costs saved.
Another advantage is the prevention of equipment breakdown through timely insight into when an asset is due for maintenance. Even preventive maintenance can come too late, which can have enormous consequences: there may be (rapidly changing) external influences that determine an asset's lifespan and need for maintenance. By combining this data, predictive maintenance becomes possible.
The predictions can be made with various machine learning techniques: for example, linear regression to predict the RUL (Remaining Useful Lifetime), classification to predict failure within a certain time span, and anomaly detection (see that solution below) to monitor suspicious situations.
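To illustrate the linear-regression approach to RUL, here is a minimal sketch with a hypothetical degradation signal: a health index that is assumed to decline roughly linearly with operating hours, with failure defined as the index reaching zero:

```python
import numpy as np

# Hypothetical degradation history: health index per operating hour.
hours  = np.array([0, 200, 400, 600, 800, 1000], dtype=float)
health = np.array([100, 91, 83, 74, 66, 57], dtype=float)

# Fit a linear trend with ordinary least squares.
slope, intercept = np.polyfit(hours, health, deg=1)

# RUL estimate: hours until the fitted line crosses the failure level.
failure_level = 0.0
current_hours = hours[-1]
rul = (failure_level - intercept) / slope - current_hours
print(f"estimated RUL: {rul:.0f} operating hours")
```

Real degradation is rarely this linear; in production, more robust models and confidence intervals around the RUL estimate would be needed.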
The holy grail in this area is the next level of data maturity: prescriptive analytics. With prescriptive maintenance, the predicted maintenance is carried out automatically, without the intervention of people.
This field is still in its infancy, and reaching this stage is a serious challenge. We are currently exploring the possibilities with some of our customers to see how this can be achieved.
Real-time quality monitoring
Before data can be used for descriptions, diagnostics, prediction, or prescription, it must be collected. 1OPTIC offers the possibility to unlock and process data from all kinds of different sources (such as PLC, MES, ERP, HMI, IoT devices, edge devices, SCADA, WMS, and sensors).
After collection and processing, 1OPTIC stores all data in a PostgreSQL-based database solution. To then describe or diagnose the data, 1OPTIC offers a comprehensive dashboard solution; of course, the data can also be fed into your own dashboard and BI solutions. It is the central processing and management of data that clears the way to get started with it: a "single pane of glass" is realized.
In practice, we see that the data is initially used mainly to follow day-to-day operations. Data from machines and devices about usage, capacity, hours, energy consumption, pressure, temperature, humidity, and viscosity can be viewed in (near) real time by you and/or your customer. Furthermore, the data can be compared and correlated with other data (think of ERP data).
Automated Alerting and Equipment Monitoring
Much equipment comes with the standard ability to report malfunctions immediately.
In many cases, we see these malfunctions remedied ad hoc, reactively. This can be done more efficiently by implementing rules that continuously monitor all sorts of data; if a certain threshold is exceeded, an alert is sent automatically.
In this way, malfunctions can be prevented because action can be taken earlier, on the basis of previously detected degradation. Preventing malfunctions is very important, partly because malfunctions can cause substantial costs.
It is therefore important to automatically receive an alert when equipment monitoring detects a malfunction, or any other critical condition you need to know about quickly.
1OPTIC offers the modular solution TripAiku, which facilitates (near) real-time monitoring of important equipment using smart triggers. With the help of TripAiku, notifications can be automatically weighed and prioritized using advanced algorithms. In this way, alarm fatigue, the situation in which there are so many alerts that you can no longer see the forest for the trees, can be successfully counteracted.
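The idea of weighing alerts to counteract alarm fatigue can be sketched as follows. This is an illustrative mechanism, not TripAiku's actual API: repeated alerts for the same condition are suppressed within a cooldown window, while high-priority alerts always get through:

```python
from dataclasses import dataclass, field

@dataclass
class AlertManager:
    """Sketch of prioritized alerting: suppress repeats of the same
    alert within a cooldown window, unless the priority is high."""
    cooldown_s: float = 300.0
    _last_sent: dict = field(default_factory=dict)

    def raise_alert(self, key: str, priority: int, now: float) -> bool:
        """Return True if the alert should actually be dispatched."""
        last = self._last_sent.get(key)
        if priority < 2 and last is not None and now - last < self.cooldown_s:
            return False  # suppressed: same low-priority alert fired recently
        self._last_sent[key] = now
        return True

mgr = AlertManager()
print(mgr.raise_alert("pump-x/overtemp", priority=1, now=0.0))    # True
print(mgr.raise_alert("pump-x/overtemp", priority=1, now=60.0))   # False (within cooldown)
print(mgr.raise_alert("pump-x/overtemp", priority=1, now=600.0))  # True
```

A production system would add persistence, escalation, and smarter weighting, but the core idea is the same: not every trigger deserves a notification.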
Anomaly detection is the use of algorithms to make abnormalities (anomalies) visible in measurements.
Centrally available data makes it possible to spot deviations at a glance via the familiar dashboards. Other reports are certainly possible as well: for example, an immediate warning if a process temperature is X percent above or below value Y.
An anomaly can take many forms. Fortunately, it does not always mean that something is wrong, as long as the deviation remains within predefined margins. Actively analyzing anomalies provides more insight into equipment behavior and contributes to working more efficiently.
These deviations are easy to perceive via SCADA, MES, or other systems, but those systems often do not offer the possibility to store data for longer periods.
As a result, the margin of deviation is often determined "on gut feeling" rather than on actual historical data. It is precisely by interpreting historical data that anomalies can be better observed and better responded to. This is essential if you want to operate as a data-driven organization.
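Deriving the margin from historical data rather than gut feeling can be as simple as a mean-plus-k-standard-deviations rule. The temperature history below is hypothetical:

```python
import statistics

def anomaly_margin(history: list, k: float = 3.0) -> tuple:
    """Derive deviation margins from historical data: mean +/- k standard deviations."""
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history)
    return (mu - k * sigma, mu + k * sigma)

def is_anomaly(value: float, history: list) -> bool:
    """Flag a reading that falls outside the data-derived margin."""
    lo, hi = anomaly_margin(history)
    return not (lo <= value <= hi)

# Hypothetical process-temperature history (degrees C).
history = [71.2, 70.8, 71.5, 71.0, 70.9, 71.3, 71.1, 70.7]
print(is_anomaly(71.4, history))  # False: within the derived margin
print(is_anomaly(74.0, history))  # True: well outside it
```

More sophisticated methods (rolling windows, seasonal baselines, learned models) follow the same principle: let the stored history, not intuition, define what counts as abnormal.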
See it for yourself
Contact us to learn how we can help you maintain your innovative power and ensure business growth.