
Battling IoT Scale with Cognitive Predictive Maintenance

Published on 07/06/2017 | Strategy


Anita Raj

Head of Growth Hacking, DataRPM


Scratching the IIoT surface

“A stitch in time saves nine,” they say, and it has never held truer than it does today, when the world around us is smartening up to save precious time.

IoT, having had its time in the sun a few years ago as the bearer of change for everyday life as we knew it, has paved the way for its logical cousin, the Industrial Internet of Things (IIoT), to take center stage. But has this merely added to the noise in our lives? Apparently, it has. Organizations are already reeling under the onslaught of information streaming in from millions, maybe billions, of sensors, devices, and machines. How does one even begin to manage information that runs into petabytes?

Case in point: according to a recent survey, about 85% of executives at large organizations believe that the IIoT will have a sizeable impact on their industry in the next three years. Adoption, however, trails belief: merely 10% of these executives reported widespread adoption of IIoT within their industries, and the numbers thin out further at the organizational level, where a mere 1.5% of executives reported clarity on how their organization would leverage IIoT.

Diving in – leveraging all the data

So, what gives? In truth, the problem with managing such ginormous amounts of data is two-fold. First, there’s the already established problem of scale. This is petabytes of data we’re talking about. It’s unrealistic to think that processing it for patterns, insights, and models is a human job, let alone doing it in a timeframe that makes a difference. The only way to solve this problem is through automation. We need machines that can autonomously separate the normal from the anomalous and flag incidents requiring attention before they get out of hand. That is, from all the data “noise,” they filter out the “signals,” helping organizations zero in on what truly requires their attention. They also need to recognize patterns and generate insights in real time to enable smarter decision-making on the go.

To draw a parallel, imagine a health professional attempting this: could they review data from thousands of patients in real time and accurately decide when to send out an emergency flag? No way. Similarly, hand-writing algorithms to scour reams of data for patterns is not only unimaginably time-consuming but also error-prone and limited to previously identified patterns. Another vital point: manual analysis varies one variable at a time while holding all others constant, which is far from how things behave in the real world. Machine learning, on the other hand, works on real, dynamic scenarios, with all parameters varying at once. In a nutshell, machines can process enormous amounts of data in a matter of milliseconds, for as long as the machines are running.
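To make the “signals from noise” filtering concrete, here is a minimal sketch of a streaming anomaly detector built on a rolling z-score. It is an illustration only; the window size, threshold, and vibration readings are assumptions, not values from any particular deployment.

import math
import random
from collections import deque

class RollingAnomalyDetector:
    """Flags readings that deviate sharply from a sensor's recent history."""

    def __init__(self, window=200, threshold=3.5, min_history=30):
        self.window = deque(maxlen=window)  # recent readings
        self.threshold = threshold          # z-score cutoff for "anomalous"
        self.min_history = min_history      # readings needed before flagging

    def observe(self, value):
        """Return True if `value` is anomalous relative to the rolling window."""
        anomaly = False
        if len(self.window) >= self.min_history:
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            anomaly = std > 0 and abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomaly

# Usage: 300 normal vibration readings, then one spike that gets flagged.
random.seed(42)
detector = RollingAnomalyDetector()
for reading in [20 + random.gauss(0, 0.3) for _ in range(300)] + [48.7]:
    if detector.observe(reading):
        print(f"anomaly flagged: {reading:.1f}")

The point of the sketch is the shape of the solution: the machine watches its own history continuously and surfaces only the handful of readings a human should look at.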

Second, there’s the challenge of implementation. Rolling out this automation can’t follow a simple, straightforward displacement technique. We’ve already established that most companies have some, if not all, of the data they need for processing. So, automating the generation of insights has to happen on the legacy infrastructure that’s already producing that data. This is more complicated than a simple “buy new, replace old” approach, since existing systems need to be manipulated, or taught, to behave differently. It’s a process of recalibrating, reprogramming, and reworking existing systems to use the information they’ve collected for any of a variety of purposes: avoiding machine failure, increasing output, or minimizing maintenance costs. Of these, predicting and avoiding machine failure alone could save companies millions of dollars annually. Not only does it reduce downtime and maintenance costs, it also empowers companies to maximize yield!
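As a hedged illustration of that retrofit approach, the sketch below taps data a legacy machine already emits and layers new analysis on top, leaving the equipment itself untouched. A CSV log file is assumed here; the file name and column names are hypothetical and would be adjusted to whatever the existing controller actually records.

import csv

def read_legacy_log(path):
    """Adapt a legacy machine's CSV log into (timestamp, reading) pairs.

    Assumes columns named 'ts' and 'vibration_mm_s'; both are illustrative
    stand-ins for whatever the existing system writes.
    """
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row["ts"], float(row["vibration_mm_s"])

# Reuse the RollingAnomalyDetector from the previous sketch: the old machine
# keeps logging exactly as it always has, and only the analysis layer is new.
detector = RollingAnomalyDetector()
for ts, value in read_legacy_log("press_07.csv"):
    if detector.observe(value):
        print(f"{ts}: flag press_07 for inspection")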

Another reason for the slow pace of adoption is a lack of clarity about the purpose of IIoT. The idea behind amassing all this data was to identify patterns and trends that could then be used to optimize processes. In industry, machine data can be used to analyze repair and failure patterns and to identify the conditions that lead to those incidents. Voila! You can now draw up a predictive maintenance plan that avoids machine downtime!
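To show how failure patterns can translate into a maintenance trigger, here is a toy sketch using scikit-learn on synthetic data. The features (temperature, vibration, hours since service), the risk formula, and the 0.5 decision threshold are all invented for illustration; in practice the labels would come from real maintenance and failure records.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic history: temperature, vibration, and hours since last service,
# with failures skewed toward hot, high-vibration, overdue machines.
X = rng.normal([60, 2.0, 400], [10, 0.5, 150], size=(2000, 3))
risk = 0.02 * (X[:, 0] - 60) + 1.5 * (X[:, 1] - 2.0) + 0.004 * (X[:, 2] - 400)
y = (risk + rng.normal(0, 0.5, 2000) > 1.0).astype(int)  # 1 = failed soon after

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")

# Score a running machine; schedule maintenance when risk crosses a threshold.
prob = model.predict_proba([[78, 3.1, 620]])[0, 1]
if prob > 0.5:
    print(f"failure probability {prob:.0%}: schedule maintenance")

The model itself matters less than the loop it enables: conditions that historically preceded failures become an early-warning score, and maintenance is scheduled before the breakdown instead of after it.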

Cognitive Predictive Maintenance: The need of the hour

68% of manufacturers are now investing in data analytics, and 46% of them agree that leveraging industrial machine data is no longer a mere option. At the pace industrial production runs today, the last thing anyone can afford is an asset breakdown. With budgets shrinking and pressure mounting every day, industrial leaders need smarter ways to enhance their output, and smarter maintenance is the first step in that direction.

As it stands today, the number of connected devices is expected to reach anywhere between 30 billion (IDC) and 75 billion (Morgan Stanley) by 2020, with the data they generate running into zettabytes. The only way to realize the true potential of IIoT is to stop merely amassing data and to instead leverage it to anticipate issues, nip them in the bud, and move ahead, learning from any new issues along the way. So it really isn’t a matter of choice anymore: to stay relevant, let alone get ahead, you need to automate, automate more, and automate yet again. Receive, process, guide. You need to automate data collection, analysis, synthesis, and modeling. It’s like moving to an age of super-efficiency, where there’s no downtime. Imagine the implications: no downtime means no delays, more output, faster output…and savings, in both time and money. Who’d ever say no to that?
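Tying the pieces together, “receive, process, guide” can be read as one automated loop. The skeleton below assumes the detector and model from the earlier sketches exist; receive() is deliberately a stub, since the real source would be whatever bus the plant actually uses (MQTT, OPC UA, Kafka, and so on).

import time

def receive():
    """Stub: pull the next batch of (machine_id, features, latest_reading)
    tuples from the plant's message bus. Returns an empty list until it is
    wired up to a real data source."""
    return []

def run_pipeline(detector, model):
    """Receive, process, guide: the automated loop described above."""
    while True:
        for machine_id, features, latest in receive():
            if detector.observe(latest):                      # process: filter noise
                prob = model.predict_proba([features])[0, 1]  # process: score risk
                if prob > 0.5:                                # guide: act on it
                    print(f"{machine_id}: schedule maintenance ({prob:.0%} risk)")
        time.sleep(1)  # polling cadence; event-driven in a real deployment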

 

This article was originally posted on LinkedIn.
