
What is explainable artificial intelligence (XAI), and why do we seek explainability & interpretability in AI systems?

Figure 1. Photo by Mitch Nielsen on Unsplash

This might be the first time you are hearing about explainable artificial intelligence, but it is certainly something you should have an opinion about. Explainable AI (XAI) refers to the techniques and methods used to build AI applications whose decisions humans can understand, that is, systems that can tell us “why” they made a particular decision. In other words, if we can get explanations from an AI system about its inner logic, we can consider it an XAI system. Explainability has recently started to gain popularity in the AI community, and we will talk about why that happened.
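To make this a bit more concrete, below is a minimal sketch of one post-hoc explainability technique, permutation importance, applied to an otherwise opaque model. The dataset, model, and hyperparameters are illustrative placeholders, not part of any particular production system:

```python
# A minimal sketch of post-hoc explainability with scikit-learn.
# The dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a "black-box" model: accurate, but not self-explanatory.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Ask the XAI layer: how much does each feature matter to the model's decisions?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```

The point is not this particular technique; it is that the explanation has to be bolted on after the fact, because the model itself does not offer one.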

Let’s dive into the technical roots of the problem, first.

Artificial intelligence as an enhancer in our lives

Technological advancements give us access to better services with greater convenience. Technology is an indispensable part of our lives. Its benefits certainly outweigh its harms, and like it or not, its impact on our lives will only grow. The invention of computers, the internet, and mobile devices has made our lives much more convenient and efficient.

After computers and the internet, artificial intelligence has become the new enhancer in our lives. From the early attempts of mathematics departments in the 1950s and 1960s to the expert systems of the 1990s, the field has come a long way. Today we ride in cars on autopilot, use Google Translate to communicate with foreigners, polish our photos with countless apps, and find the best restaurants with smart recommendation algorithms. The effect of AI on our lives will only increase. Artificial intelligence has become an indispensable and undisputed enabler that enhances our quality of life.

On the other hand, AI systems have already become so complicated that it is pretty much impossible for a regular user to understand how they work. I am sure that no more than 1% of Google Translate users know how it works, yet we trust the system and use it extensively. But we should know how these systems work, or at least be able to get information about them when necessary.

Too much focus on accuracy

Mathematicians and statisticians have worked on traditional machine learning algorithms such as linear regression, decision trees, and Bayesian networks for a long time; some of them predate the invention of computers. These algorithms are very intuitive, and when you make a decision based on one of them, it is easy to generate an explanation. However, they only reach a certain level of accuracy. So, our traditional algorithms were highly explainable, but only moderately successful.
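Here is a minimal sketch of what that built-in explainability looks like, using an illustrative scikit-learn setup: a fitted decision tree can simply be printed as human-readable rules.

```python
# A minimal sketch: traditional models such as decision trees explain themselves.
# Dataset and hyperparameters are illustrative placeholders.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# A shallow tree: modest accuracy, but every prediction can be traced to a rule.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```

Every prediction the tree makes corresponds to one branch of those printed rules, so the "why" comes for free.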

Then, everything changed with the invention of the McCulloch-Pitts neuron, the development that eventually led to the field of deep learning. Deep learning is a sub-field of artificial intelligence that uses artificial neural networks to loosely replicate the working mechanism of the neurons in our brain. Thanks to increased computing power and optimized open-source deep learning frameworks, we became able to build complex neural networks with high accuracy. AI researchers started to compete to achieve the highest accuracy possible.

This competition certainly helped us build great AI-enabled products, but it came with a price: low explainability.

“We focused so long on increasing the accuracy of AI systems that we forgot to pay attention to interpreting and explaining their decisions.”

Neural networks can be extremely complex and difficult to comprehend; they can have billions of parameters. For example, OpenAI’s revolutionary NLP model, GPT-3, has 175 billion parameters, and it is challenging to derive any reasoning from such a complex model.

Figure 2. Accuracy vs. Explainability Performances of Machine Learning Algorithms (Figure by Author)

Figure 2 shows the relationship between the accuracy and the explainability of machine learning algorithms.

As you can see, an AI developer has a lot to lose by choosing a traditional algorithm over a deep learning model. So we see more and more accurate models every day, with less and less explainability. Yet we need explainability more than ever.

Let’s see why!

1. AI systems are increasingly used in sensitive areas

Figure 3. Photo by DON JACKSON-WYATT on Unsplash

Remember the old days when warfare meant swords and rifles? Well, it is all changing, faster than you think. AI-powered drones are already capable of taking out a target with no human intervention. Some militaries already have the capability to field such systems; however, they are concerned about unaccountable results. They would not want to rely on systems when they have no idea how they work. In fact, there is already an ongoing XAI program at the U.S. Defense Advanced Research Projects Agency (DARPA).

Self-driving cars are another example. You can already drive a Tesla on Autopilot. This is a great comfort for the driver, but it comes with great responsibility. You would want to know what your car would do when it encounters a moral dilemma, where it has to choose the lesser of two evils. For example, should the autopilot sacrifice a dog’s life to save a person on its path? You can explore how your personal morals compare to our collective morals on MIT’s Moral Machine.

AI systems are affecting our social lives to a greater extent every passing day. We need to know how they make decisions (i) generally and (ii) in a particular event.

2. Exponential advancements in AI may create existential threats

Figure 4. Photo by Cata on Unsplash

We have all watched Terminator and seen how machines could become self-conscious and possibly destroy humanity. AI is powerful: it can help us become a multi-planetary species, or it can destroy us entirely in an apocalypse-like scenario. In fact, surveys suggest that more than 30% of AI experts expect either bad or extremely bad outcomes once we achieve artificial general intelligence. Our most powerful weapon against disastrous outcomes is to understand how AI systems work so that we can put checks and balances on them, just as we do on governments to limit their excessive powers.

3. AI-related dispute resolutions require reasoning and explanation

Thanks to the developments in human rights and freedoms over the last two centuries, current laws and regulations already require a level of explainability in sensitive areas; the field of legal argumentation and reasoning deals with its boundaries. Just because AI-enabled applications have taken over some traditional occupations does not mean that the parties controlling them are no longer liable to provide explanations. They must abide by the same rules and provide explanations for their services. General legal principles require explanations for automated decision-making when a legal dispute arises (e.g., when a Tesla on Autopilot hits a pedestrian).

Figure 5. Photo by Tingey Injury Law Firm on Unsplash

But general rules and principles are not the only sources of mandatory explainability. Several contemporary laws and regulations also create different forms of a right to explanation.

The General Data Protection Regulation (GDPR) in the European Union already contemplates a right to explanation: when a citizen is subject to automated decision-making, it requires meaningful information about the logic of the AI system involved.

In the United States, on the other hand, citizens have the right to receive an explanation when their credit application is rejected. In fact, just as Figure 2 suggests, this requirement pushes credit scoring companies toward regression models (which are more explainable) so that they can provide the mandatory explanation.
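To illustrate why a regression model makes this easier, the sketch below derives simple “reason codes” for one applicant from a logistic regression: each feature’s contribution is just its coefficient times its standardized value. The feature names and data here are entirely hypothetical, and real scoring systems are considerably more involved.

```python
# A minimal sketch of reason codes from a logistic regression credit model.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "late_payments", "credit_history_years"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # stand-in for applicant data
y = (X @ np.array([1.0, -1.5, -2.0, 0.8]) + rng.normal(size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

applicant = scaler.transform(X[:1])           # one applicant (assume they were rejected)
contributions = model.coef_[0] * applicant[0] # per-feature contribution to the score
for name, value in sorted(zip(feature_names, contributions), key=lambda p: p[1]):
    print(f"{name}: {value:+.2f}")            # most negative factors = rejection reasons
```

Because the model is linear, the contributions add up to the score itself, which is exactly the kind of statement a rejection letter can cite.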

4. Eliminating historic biases from AI systems requires explainability

Figure 6. Photo by Clay Banks on Unsplash

Humans have historically been discriminatory, and this is reflected in the data we collect. When a developer trains an AI model, she feeds it historical data with all its biases and discriminatory elements. If our observations contain racist biases, our model will project those biases into its predictions. For instance, Bartlett’s research shows that in the United States, at least 6% of minority credit applications are rejected due to purely discriminatory practices, so training a credit application system on this biased data would have devastating effects on minorities. As a society, we must understand how our algorithms work and how we can eliminate their biases so that we can guarantee the liberté, égalité et fraternité of our society.
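A first, very rough check for this kind of bias is simply to compare the model’s approval rates across groups, as in the sketch below. The column names, toy data, and the 0.8 threshold (the common “four-fifths rule”) are illustrative, not a complete fairness audit.

```python
# A minimal sketch of a disparate-impact check on model decisions.
# Columns, data, and the 0.8 threshold are illustrative placeholders.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

approval_rates = decisions.groupby("group")["approved"].mean()
print(approval_rates)

# Disparate impact ratio: approval rate of the least-approved group vs. the most-approved one.
ratio = approval_rates.min() / approval_rates.max()
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: potential disparate impact; the model's decisions need explanation and review.")
```

Checks like this tell you that something is wrong; explainability techniques are what tell you why, and which features to fix.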

5. Automated business decision making requires reliability and trust

Figure 7. Photo by Austin Distel on Unsplash

Explainability also makes sense from a financial point of view. When you use an AI-enabled system that recommends particular actions for an organization’s sales and marketing efforts, you want to know why it recommends those actions. Decision-makers must understand why they need to take a particular action, since they will be held accountable for it. This is significant for businesses in the real sector as well as in the financial sector; especially in financial markets, a wrong move can cost a firm dearly.

Final thoughts

As you can see, there are several valid reasons why we need explainable AI. These reasons come from various disciplines and fields, such as sociology, philosophy, law, ethics, and business. To sum up, we need explainability in AI systems because:

  • AI systems are increasingly used in sensitive areas
  • Exponential advancements in AI may create existential threats
  • AI-related dispute resolutions require reasoning and explanation
  • Eliminating historic biases from AI systems requires explainability
  • Automated business decision making requires reliability and trust

Next, I will dive into what our society is doing in political, legal, and scientific spheres to boost our automated decision-making systems’ explainability.


If you are reading this article, I am sure that we share similar interests and are/will be in similar industries. So let’s connect via Linkedin! Please do not hesitate to send a contact request! Orhan G. Yalçın — Linkedin
