In my last blog, I talked about how artificial intelligence might help manufacturers discover the holy grail of manufacturing: knowing what to make and when to make it, to exactly meet your customers’ demands – an accurate forecast. In this blog, I will explain why we ourselves might be the barrier to that happening.
The effect on a business would be dramatic: no waste, reduced cost, improved margins, increased efficiency, better customer satisfaction… the list goes on.
However, the adoption rate of AI in manufacturing is not high. Is this because of the technology, the computing power, or simply its infancy? Perhaps there is some truth in all of these reasons, but I believe there is a bigger problem: us!
We don’t trust what we don’t understand
Artificial intelligence challenges us to trust that a computer can be as smart as a human. Put simply, if an AI engine provides the same answer as the one we expect, it must be right, but if it provides a different answer, then surely it must be wrong!
Humans are inquisitive by nature; we are wired to want to understand how solutions are derived. If we don’t understand the mechanics, we immediately dismiss the results as wrong or unimportant.
The perfect example of this was Watson for Oncology, the AI solution from IBM that promised to make treatment recommendations for the 12 main forms of cancer. When doctors started to interact with Watson, they found themselves in an impossible situation. If Watson provided a treatment recommendation that coincided with their own opinion, the doctors saw no value in it – artificial intelligence was simply telling them what they already knew.
The real problem came when Watson generated a recommendation that contradicted a doctor’s opinion; the doctor would then conclude that Watson was wrong and couldn’t be trusted. The AI engine’s algorithms were so complicated that it was impossible to explain why its recommendations were plausible.
As a result, Watson for Oncology’s premier partner, the MD Anderson Cancer Center, dropped the program.
Cognitive Bias
Another problem challenging AI is bias. Bias can occur in many forms: cognitive bias, social bias, statistical bias, data bias and more. Bias is an inaccuracy that is systematically wrong in the same direction.
Initially, an AI system is given a set of rules, algorithms and data. It discovers patterns and produces a result or recommendation, and is then fed back the actual decision taken so that it can learn.
This feedback is a potential problem: bias in the initial data and statistical inputs can be minimized, but with cognitive bias, the interpretation of the results and the actions taken from them are made by humans.
These decisions are likely to be biased towards the outcomes we wanted, expected or thought they should be (confirmation bias). If the AI learns from these biased conclusions, it will not only inherit the bias but is likely to amplify it, as the simple sketch below illustrates.
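To make this concrete, below is a minimal sketch in Python of how such a loop can behave. The numbers and update rules are entirely illustrative assumptions, not any real forecasting system: a planner records outcomes nudged toward their own expectation, and the model retrains on those records.

```python
# Minimal sketch, with made-up numbers: a model retrains on "actuals"
# that have been filtered through a planner's expectation (confirmation bias).

true_demand = 100.0        # real average demand (unknown to everyone)
planner_belief = 80.0      # the planner "knows" demand is about 80
model_estimate = 100.0     # the model starts out unbiased

for week in range(1, 9):
    forecast = model_estimate

    # Confirmation bias: the recorded outcome is nudged 30% of the way
    # toward what the planner expected to see.
    recorded_actual = forecast + 0.3 * (planner_belief - forecast)

    # The model learns from the recorded (biased) actual, not from reality.
    model_estimate += 0.5 * (recorded_actual - model_estimate)

    print(f"week {week}: forecast {forecast:6.2f} -> recorded {recorded_actual:6.2f}")

print(f"model estimate after feedback: {model_estimate:.2f} (true demand {true_demand})")
```

Each cycle the model retrains on outcomes that were filtered through the planner’s belief, so the error compounds rather than cancels: the bias is inherited, then reinforced.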
Trusting summary information
Trusting an AI’s solution is critical, but as humans we are naturally untrusting. Given a simple command, “go and stand over there,” our natural response is, “why?” Only if given a good reason will we comply.
So, when an AI forecasting solution predicts that we should buy 1,000kg of tomatoes or make 200 tonnes of green paint, our first inclination is to ask, “why?” We will want to understand the reasoning behind the prediction.
An example is one of the most advanced AI solutions in general use: the weather forecast. The numbers behind it are impressive. The UK Met Office’s weather forecasting solution comprises:
- AI models that are capable of over 14,000 trillion arithmetic operations per second – that’s more than 2 million calculations per second for every man, woman and child on the planet.
- 2 petabytes of memory – enough to hold 200 trillion numbers.
- A total of 460,000 computer cores. These are faster versions of those found in a typical quad-core laptop.
- 24 petabytes of storage for saving data – enough to store over 100 years’ worth of HD movies.
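Those headline figures are easy to sanity-check. A quick back-of-the-envelope calculation, assuming a world population of roughly 7 billion, bears out the “per person” claim and the memory figure:

```python
# Back-of-the-envelope check of the Met Office figures quoted above.

ops_per_second = 14_000 * 1e12      # 14,000 trillion operations per second
world_population = 7e9              # assumption: roughly 7 billion people
print(f"ops per person per second: {ops_per_second / world_population:,.0f}")
# -> 2,000,000, i.e. the "2 million calculations per second" for every person

memory_bytes = 2e15                 # 2 petabytes
numbers_held = 200e12               # 200 trillion numbers
print(f"bytes per number: {memory_bytes / numbers_held:.0f}")
# -> 10 bytes per stored number, a plausible footprint
```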
The AI engine will produce a data set that is then interpreted and summarized. This is where the problem begins, as the images below demonstrate. Ask yourself one question based on what you see in each image: “will you take an umbrella today?”
In the first image we have only summary data, so we will probably look out of the window and decide for ourselves – a personal guess.
In the second image we are shown the percentage chance of rain, more information that leads us to a reasonable judgement.
Lastly, when presented with an animation showing the predicted path, density and timing of the rain, we are given the details we need to make an accurate assessment and informed decision.
The base data behind all three predictions is the same, but the presentation is totally different, taking us from a personal guess, through a reasonable judgement, to an informed decision.
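The same point can be sketched in a few lines of code. The hourly probabilities below are invented for illustration, and the three print-outs stand in for the three images: one base forecast, presented with increasing detail.

```python
# One made-up base forecast, presented at three levels of detail.

hourly_rain_chance = {              # hour of day -> probability of rain
    8: 0.05, 9: 0.10, 10: 0.30, 11: 0.60,
    12: 0.70, 13: 0.40, 14: 0.15, 15: 0.05,
}

peak = max(hourly_rain_chance.values())

# Level 1: a bare summary symbol -- a personal guess at best.
print("Summary:", "rain" if peak > 0.5 else "dry")

# Level 2: a headline percentage -- enough for a reasonable judgement.
print(f"Chance of rain today: {peak:.0%}")

# Level 3: the hour-by-hour detail -- enough for an informed decision.
for hour, chance in sorted(hourly_rain_chance.items()):
    bar = "#" * int(round(chance * 10))
    print(f"{hour:02d}:00  {chance:4.0%}  {bar}")
```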
Overcoming the barriers
The barriers to the global adoption of artificial intelligence are known, and as humans we must accept that we are one of them. We must challenge the way we think if we are to realize the promises AI could deliver.
Trusting the result and accepting that it may well provide us with an answer that is not wholly expected is also key to AI adoption.
One solution to increasing levels of trust would be to break open the mysterious “black box” of machine learning algorithms and be more transparent about how they work. Then let them work alongside humans, in a competitive manner, to see who gets the better results: man or machine?
Involving people in the AI decision-making process has been shown to improve trust and allow feedback for the AI to learn from human interaction. Most of us will never understand the intricate inner workings of an AI system, but if we are part of the process and are provided with more information on how the solution is derived, then we will be more open to trusting AI predictions.
Accepting a simplified result from complex data is still unlikely. In the specific area of complex forecasting, I do not believe that a planner would “buy 1,000kg of tomatoes” without knowing why. If, however, the planner was given the information used to derive the answer, such as:
- Historical forecast was uplifted by 18 percent as the weather will be sunny tomorrow
- 5 percent added to this week’s forecast as social media is buzzing about an article on the health benefits of eating tomatoes
- Forecast reduced by 20 percent for the next 11 days as there are promotions running at a competing retailer
Then perhaps the planner would be more likely to accept the detail and make that all-important informed decision – the holy grail of manufacturing, an accurate forecast.
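As a sketch of what such an explainable forecast might look like, here the adjustments from the list above are composed multiplicatively and each one carries its reason. The structure and figures are illustrative assumptions, not any specific product’s logic:

```python
# Illustrative "explainable" forecast: every adjustment carries its reason.

base_forecast_kg = 1000.0

adjustments = [
    # (multiplier, reason)
    (1.18, "weather will be sunny tomorrow (+18%)"),
    (1.05, "social media buzz about tomato health benefits (+5%)"),
    (0.80, "competitor promotion for the next 11 days (-20%)"),
]

forecast = base_forecast_kg
print(f"base forecast: {forecast:.0f} kg of tomatoes")
for factor, reason in adjustments:
    forecast *= factor
    print(f"  x {factor:.2f}  {reason}  ->  {forecast:.0f} kg")

print(f"recommended purchase: {forecast:.0f} kg")
```

A planner shown the reasons alongside the number can challenge any single adjustment, rather than having to reject the whole recommendation.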
If you enjoyed this blog, then please look out for others from the team at SPS in IFSBlogs. I welcome comments on this or any other topic concerning process manufacturing.
Connect, discuss, and explore using any of the following means:
- Twitter: @ElkinsColin
- Email: elkins@ifsworld.com
- Blog: http://blog.ifs.com/author/colin-elkins
- LinkedIn: https://www.linkedin.com/in/colinelkins
Follow us on social media for the latest blog posts, industry and IFS news!