Predictive maintenance and the future of manufacturing with IoT

Predictive maintenance, a set of processes designed to prevent everything from full-blown crises to everyday waste and inefficiency, is the most valuable application of harnessing and analysing masses of IoT data in manufacturing, argues Wael Elrifai.

Unless you’re confident that every part of your manufacturing operation is performing at peak efficiency, there’s enormous value in exploring the world of predictive maintenance for IoT. There are certainly worse ways to improve than having more and better information.

You might argue, “We already capture data from around the operation: ERP, CRM, EAM and a jumble of other acronyms. That’s what our Enterprise Data Warehouse (EDW) was for!”

That’s right, that’s what your EDW was for: processing all that well-organised operational information. However, it simply isn’t agile enough to support today’s big data IoT use cases. It’s not an experimentation and exploration platform. It can report on the world as it is, not imagine how it might be.

Not sure? Ask your company’s data scientists and software engineers how long it would take them to “land” usable data in your EDW to start the exploration process. How would they analyse and store data from thousands of sensors across your factory and back-office operations, each providing different data structures and formats at different speeds?

This is where the Hadoop big data infrastructure comes in – it’s fundamentally different. With Hadoop’s ‘schema-on-read’ approach, you don’t need to design the data structures and flows in advance. This is the opposite of the highly structured EDW.
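
To make that concrete, here’s a minimal sketch of schema-on-read, assuming PySpark running against HDFS; the paths and field names are hypothetical. Raw sensor files are landed as-is, and a schema is inferred only when the data is read for exploration:

    # Minimal schema-on-read sketch (hypothetical paths and fields).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sensor-exploration").getOrCreate()

    # Land raw sensor output in HDFS as-is; infer structure at read time.
    vibration = spark.read.json("hdfs:///landing/sensors/vibration/")
    temps = spark.read.csv("hdfs:///landing/sensors/temperature/",
                           header=True, inferSchema=True)

    # Decide how to interpret the data at exploration time, not design time.
    vibration.printSchema()
    temps.groupBy("machine_id").avg("reading_c").show()

Contrast this with an EDW, where the table schema, load jobs and conformance rules would all have to exist before the first row could land.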

Over the last decade, manufacturers have struggled to justify the enormous expense of EDW development – it’s a much harder case to make than for, say, physical warehouses. ROI estimates ranged from educated guesses to wild ones, and were often tied to the success of a dependent initiative whose outcomes were equally difficult to measure, such as sales and operations planning. Most EDW projects “failed” in the sense that they missed time, scope, or budget targets.

With Hadoop’s low cost of entry and totally different architecture, it no longer has to be this way. Schema-on-read allows us to “ingest” any data format and choose the best way to analyse and synthesise it at exploration time. There is no massive up-front expense associated with standardising structures, KPIs, access, and so on.

In other words, the right analogue for a big data IoT deployment and its data science is not the finished EDW with its reports, dashboards and alerts. A Hadoop deployment is more like the Proof-of-Concept (PoC) stage of your EDW project, something for which calculating an ROI makes no sense.

If you’re with me this far, how do we take the next step?

When advising companies, I start by asking a simple question: “What could we do if we had perfect, universal, and timely information?” That is, data without errors, all the data available to humankind, and available on demand. In this context, “information” includes raw data as well as anything derived from it. Start from the presumption that data exists that would allow you to predict anything of value.

This forces us to think about where inefficiencies could be, not just where we know they are. Try the exercise, and see what you come up with!

Now let’s go back to the original premise: that there are efficiencies to be had, and that ROI shouldn’t be measured during the PoC phase of setting up an IoT infrastructure. Here’s how to minimise costs and risks:

  • DO verify the ability to capture data from “things” by adding sensors, communications networks, and so on.
  • DO verify that you have good maintenance logs describing previous failure modes and conditions.
  • DO use trusted software vendors with extensive Hadoop experience.
  • DO engage high-level resources with IoT experience.
  • DO take advantage of GUI-based tools.
  • DO hire real data scientists, not just statisticians. They should understand machine learning techniques and be able to describe the benefits and drawbacks of the major classes of techniques without referring to notes.
  • DO engage subject matter experts to work with your data scientists.
  • Finally, DO make sure you can blend your existing data with new data types (see the sketch below).
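
To ground that last point, here is a minimal sketch of blending old and new data for predictive maintenance: joining historical maintenance logs against daily sensor aggregates and fitting a baseline failure classifier. The file names, columns and the choice of pandas and scikit-learn are illustrative assumptions, not a prescribed stack:

    # Hypothetical sketch: blend maintenance logs (existing data)
    # with IoT sensor aggregates (new data) and fit a baseline model.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Existing data: logs recording which machines failed on which days.
    logs = pd.read_csv("maintenance_logs.csv", parse_dates=["date"])
    # New data: per-machine daily aggregates from factory sensors.
    sensors = pd.read_csv("sensor_daily.csv", parse_dates=["date"])

    # Blend: one row per machine per day, labelled with failure outcomes.
    data = sensors.merge(logs[["machine_id", "date", "failed"]],
                         on=["machine_id", "date"], how="left")
    data["failed"] = data["failed"].fillna(0).astype(int)

    features = ["vib_rms", "temp_max"]
    X_train, X_test, y_train, y_test = train_test_split(
        data[features], data["failed"], test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("Holdout accuracy:", model.score(X_test, y_test))

Even a toy model like this makes the dependency on the first two DOs obvious: without sensors feeding the features and maintenance logs providing the labels, there is nothing to learn from.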

Your shopping list for big data tools should include:

  • Proven (live deployments)
  • Stable
  • Transparent (open source)
  • Auditable
  • Adaptable & embeddable
  • Major releases at least yearly

There’s a lot to consider here, but this advice comes from hands-on experience with manufacturers like KDS, Caterpillar and Halliburton Landmark, whose predictive maintenance applications are working today to improve safety, prevent environmental disasters and increase factory throughput. Most would agree that these kinds of returns are well worth the investment.

Wael Elrifai is Director of Enterprise Solutions and Big Data Guru at Pentaho.