
As IFS integrates artificial intelligence (AI) into our enterprise software products, we are paying a great deal of attention to how to ensure that decisions made by AI are explainable. Explainability matters because people need to grasp what AI is doing, and why, in order to do their own jobs better. They also need to be able to explain to auditors, regulators or even litigators how decisions or determinations were arrived at.

Even if the intelligence is artificial, human comprehension and discretion must be organic. Enterprise software companies like IFS have a few tools at their disposal to make explainable AI omnipresent in the system, shortening time to value and baking explainability in from the start.

The single most common of these will be considered old-school by those on the bleeding edge, but yes, I am talking about the event-driven business logic of enterprise resource planning (ERP) software.


Harnessing the business logic for Intelligent Process Automation

ERP and other systems of record can drive intelligent process automation (IPA) using the underlying business logic of the software. Business software encodes processes and procedures that are well understood. By collating and analyzing those processes and the results they produce, machine learning (ML) models can, given sufficient transactional history, improve these automated processes.

But isn’t the point of AI and ML to replace business rules? We really need both. We need business rules that define how decisions are made in the business, but we also need those rules to change as new information unfolds, and that is where ML comes into the picture. An ML model can evaluate the outcomes of rule-based decisions and revise the rules in an explainable fashion. Ultimately, ML may be able not only to augment existing rules but to suggest new ones.
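
To make that interplay concrete, here is a minimal sketch (illustrative only, with made-up reorder data and thresholds, not IFS product code) of a model learning from the outcomes of a fixed reorder rule and showing where that rule falls short:

```python
# Hypothetical sketch: a static reorder rule ("reorder when stock < 100")
# is compared against a shallow, easy-to-explain model trained on the
# recorded outcomes of past reorder decisions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Simulated transactional history: stock level, average daily demand,
# and whether the situation led to a stock-out.
stock_level = rng.integers(0, 500, size=1000)
daily_demand = rng.integers(1, 50, size=1000)
stock_out = (stock_level < daily_demand * 7).astype(int)  # recorded outcome

# Existing business rule, defined by the business, not learned.
rule_says_reorder = stock_level < 100

# A depth-2 tree learns from outcomes yet stays readable as if/else rules.
X = np.column_stack([stock_level, daily_demand])
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, stock_out)

rule_accuracy = (rule_says_reorder == stock_out).mean()
model_accuracy = model.score(X, stock_out)
print(f"static rule agreement with outcomes:  {rule_accuracy:.2f}")
print(f"learned rule agreement with outcomes: {model_accuracy:.2f}")
```

The learned splits can then be read back as candidate rule revisions, for example a demand-dependent reorder threshold instead of a fixed one, and reviewed by a person before anything changes in the live system.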

The “explainable” advantage of this approach is easier to appreciate by looking at the two levels of interpretability it offers: local and global explainability.

Local explainability

Local explainability is a simple, visual audit trail showing how a specific decision was made, and it should always be offered when feasible. Why did the model suggest we double our inventory of this specific stock keeping unit, spare part or raw material? Why are we reducing the periodicity of maintenance on a given class of assets in the oil field? Here we favor visually intuitive methods for local explainability; in some cases, inherently explainable models such as decision trees can be used to solve a problem, and that choice in itself facilitates explainability.
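
As a small, hedged illustration of that decision-tree case (the features and data below are invented for the example), the path a single prediction takes through the tree is itself the audit trail:

```python
# Hypothetical sketch: print the decision path behind one inventory suggestion.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["stock_level", "daily_demand", "lead_time_days"]
rng = np.random.default_rng(1)
X = rng.integers(0, 200, size=(500, 3))
y = (X[:, 0] < X[:, 1] * X[:, 2] / 10).astype(int)  # 1 = increase inventory

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole tree, readable as nested if/else rules by a planner or an auditor.
print(export_text(tree, feature_names=feature_names))

# The local view: which thresholds this specific item crossed.
item = np.array([[40, 30, 14]])
node_path = tree.decision_path(item).indices
print("prediction:", tree.predict(item)[0])
print("nodes visited:", list(node_path))
```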


Global explainability

Sometimes a broader insight into how the model actually works is required, among other things to make sure the model isn’t interpreting data in a biased way. For example, there have been cases of biased models that penalized certain ethnicities or social groups when recommending whether or not to grant loans. Global explainability, in other words, tries to understand the reasoning of a model at a higher level, rather than focusing on the steps that led to a specific decision. A global explainability approach can also help ML practitioners tweak and adjust the model’s decision-making process for better performance and quality.
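
One common way to get that higher-level picture, sketched below with made-up loan-style data rather than any real IFS model, is permutation importance: scramble each input in turn and measure how much the model’s quality drops, which reveals which variables the model really leans on and whether any of them look like proxies for attributes it should not use:

```python
# Hypothetical sketch: permutation importance as a global explainability check.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "postcode", "years_employed"]
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name:>15}: {importance:.3f}")
# If a proxy feature such as postcode ranked highly here, that would be a
# red flag for exactly the kind of bias described above.
```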

In explaining how AI makes decisions, there are two methods we often use in enterprise software, and each has its place depending on the use case.

One of these methods, the SHapley Additive exPlanations (SHAP) framework, is based on the game theory work of Lloyd Shapley. It allows us to “reverse engineer” the output of a predictive algorithm and understand which variables contributed most to it.
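
In practice that often looks something like the following minimal sketch, using the open-source shap package with a placeholder model and synthetic data rather than an actual IFS pipeline:

```python
# Hypothetical sketch: attribute one prediction to its input features with SHAP.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

feature_names = ["stock_level", "daily_demand", "lead_time_days", "scrap_rate"]
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 4))
y = 2 * X[:, 1] + X[:, 2] - 0.5 * X[:, 0] + rng.normal(scale=0.1, size=1000)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for name, value in zip(feature_names, shap_values[0]):
    print(f"{name:>15}: {value:+.3f}")
# Each number is that feature's contribution, in output units, to pushing
# this one prediction away from the model's average output.
```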

Like SHAP, Local Interpretable Model-agnostic Explanations (LIME) can help determine feature importance and contribution, but it looks at fewer variables than SHAP and can therefore be less accurate. What it lacks in consistency, though, it makes up for in speed.
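
A comparable sketch with the open-source lime package (again illustrative only, on synthetic data) fits a simple local surrogate around one prediction and reports the weights it learned:

```python
# Hypothetical sketch: explain one classification locally with LIME.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

feature_names = ["stock_level", "daily_demand", "lead_time_days", "scrap_rate"]
rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["keep", "reorder"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
for rule, weight in explanation.as_list():
    print(f"{rule}: {weight:+.3f}")
```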

In our work at IFS, depending on the use case, we may use either SHAP or LIME to deliver explainers for our AI processes to end users. If the model is fairly straightforward and the use case not too sensitive, LIME may suffice, but a complex model in a highly regulated or mission-critical industry may require more effort and resources to provide the appropriate level of insight.


Why ask why?

As humans, we want to understand decisions before we accept them, at least to some extent. To build the comfort needed for adopting a technology like AI, which promises to help us make intelligent decisions, we can start by extending the underlying, familiar business logic of existing enterprise software products into a native IPA engine. If you already license business logic, let AI help you take that logic to the next level.

