Artificial intelligence is becoming a part of our daily lives: from image and facial recognition to analytics and personalized systems, the need to trust these systems is paramount. AI is finding its way into industries such as education, health care, manufacturing, and finance. Many of the algorithms used in AI and machine learning cannot be examined to find out how and why a decision was made. To ensure that these systems work as expected and produce transparent explanations for the decisions they make, explainable AI can be used.
Explainable AI refers to methods and techniques in the application of AI solutions whose results can be understood by human experts. The central idea of explainable AI is to understand the predictions of a machine learning model. The goal is to make the model as interpretable as possible, which helps in testing its reliability and the causal influence of its features. There are two major dimensions of interpretability:
Explainability (Why did it do that?)
Transparency (How does it work?)
Explainable AI systems help in assessing a model's input features and identifying the features that drive its predictions. Moreover, explainable AI models give us a sense of control and help us decide whether we can rely on their predictions. Financial services need explainable AI to build trust and transparency, address unintentional bias, and drive adoption. For a better understanding, Nicklas Ankarstad explained interpretability models at the 5th CULytics Summit.
Interpretability Models
It would be great if we could get better insight into model predictions and improve our decision making. With the advancement of explainable AI, this has become much easier. Machine learning models are now ubiquitous and an inseparable part of our lives. Explainable AI contrasts with the concept of a "black box" in machine learning, where even the designers cannot explain why the AI arrived at a specific decision. From smart speakers with built-in conversational agents to personalized recommendation systems and everything in between, we use these systems daily without knowing why they behave the way they do. We have given them the ability to influence our decisions, so it is important that we can trust them. With the help of explainable AI techniques, we can understand the inner workings of such models. Let's look at three of them:
- LIME (Local Interpretable Model-Agnostic Explanations)
LIME is an algorithm that can explain the prediction of any classifier or regressor in a faithful way by approximating it locally with an interpretable model. LIME is locally faithful, but not globally accurate. It provides detailed information on why an individual prediction was made. The idea of LIME is to explain the model locally, in the vicinity of a single instance, rather than producing explanations at the level of the entire model.
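As a minimal sketch of how this looks in practice, the snippet below uses the open-source lime library to explain one prediction of a tabular classifier. The random forest model and the breast cancer dataset are illustrative assumptions, not part of the original talk.

```python
# Hypothetical example: explain a single prediction of a scikit-learn
# classifier with LIME. Dataset and model choice are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Build a local explainer around the training data distribution.
explainer = LimeTabularExplainer(
    training_data=X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs this one instance and fits a simple interpretable
# model in its neighbourhood, then reports local feature weights.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top local feature contributions
```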
- SHAP (SHapley Additive exPlanations)
SHAP uses a game-theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. The SHAP approach was developed by Lundberg and Lee. It has a strong mathematical foundation in cooperative game theory, in which each player in a coalition is rewarded based on how much they contribute to the outcome. The best thing about SHAP is that it is globally consistent: its global importance agrees with its local importance, since the global view is an aggregation of the local importances. It is also very accurate for tree-based models.
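The sketch below shows the idea with the open-source shap library and its TreeExplainer, which is the fast path for tree ensembles. The gradient boosting model and dataset are assumptions made for illustration.

```python
# Hypothetical example: local and global SHAP explanations for a
# tree-based model. Dataset and model choice are assumptions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shapley values: each feature's contribution to pushing one
# prediction away from the baseline (expected model output).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Local explanation: per-feature contributions for one instance.
print(dict(zip(data.feature_names, shap_values[0])))

# Global importance is an aggregation of the local importances.
shap.summary_plot(shap_values, X_test, feature_names=data.feature_names)
```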
- Explainable Boosting Machine (EBM)
The Explainable Boosting Machine uses modern machine learning techniques like bagging, automatic interaction detection, and gradient boosting to breathe new life into traditional GAMs (Generalized Additive Models). It is available in an open-source library and performs comparably to state-of-the-art machine learning techniques like random forests and gradient boosted trees. It can be used to understand both the model's global behavior and the reason behind an individual prediction. The EBM dashboard is known for its clean user interface. Unlike black-box models, EBM provides lossless explanations that are editable by domain experts. Once we understand why the model predicts the way it does, we can build the trust in the model that is critical for interacting with machine learning.
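A minimal sketch using the open-source InterpretML library is shown below: it fits an EBM and pulls up both the global shape functions and local per-prediction explanations. The dataset choice is an illustrative assumption.

```python
# Hypothetical example: train an EBM and inspect its global and
# local explanations with InterpretML. Dataset is an assumption.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# EBM is a GAM fitted with bagging and boosting, so each feature's
# contribution can be read off the model directly.
ebm = ExplainableBoostingClassifier(feature_names=list(data.feature_names))
ebm.fit(X_train, y_train)

# Global behavior: per-feature shape functions and importances,
# rendered in the interactive dashboard.
show(ebm.explain_global())

# Local explanations: why these individual predictions were made.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```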
So, when should you use which model? It depends on the requirement. If you want to know how explainable AI models make decisions, ask your data scientists or data science vendors for a better understanding. Credit union members are loyal because they trust their credit union, and members need to know why they receive lower credit limits or are offered a specific product. Maintaining trust and transparency in your credit union is important, and explainable AI can help.