Dr. Jung in one of his classes on optimization, Aalto University

Personalized Explainable Machine Learning

In this blog, Dr. Alex Jung will discuss an information-theoretic approach to personalized explainable machine learning. Alex obtained his MSc and Ph.D. at TU Vienna in 2008 and 2012, respectively. After postdoc positions at ETH Zurich and TU Vienna, he joined Aalto University as an Assistant Professor in 2015. He set up the group “Machine Learning for Big Data”, which studies theory and algorithms for machine learning from large-scale data. A recent focus is the interplay between network structure and the personalized predictive models associated with the nodes of a network-structured dataset. He is also passionate about teaching the basic principles of machine learning: several thousand students at and beyond Aalto University have taken his courses on machine learning, artificial intelligence, and convex optimization. The Department of Computer Science elected him Teacher of the Year in 2018.

Alex Jung

This article is based on a paper by Alex and his collaborator that was recently accepted to IEEE Signal Processing Letters (https://arxiv.org/abs/2003.00484).

 Let’s hear from Dr. Jung!

“Explainability” is a key challenge for humans in making full use of automated decision making based on artificial intelligence (AI) or machine learning (ML). Recent years have brought powerful predictive systems that are able to recognize objects in images, transcribe natural text, or find the best move in a game. In order to apply machine learning to applications with a significant effect on humans, such as health-care, transport, or communication, predictions obtained from ML methods need to be explainable. 

Let me share a few examples. Suppose you are connected to a smart grid in which a machine learning algorithm decides when to start your washing machine. If you do not get a convincing explanation for its decision, e.g., that at the moment there is too little wind and sunshine to power the washing machine, you will not accept such ML-based automation.

Another example is the processing of social benefit claims. Each decision of a social insurance institution (like Kela in Finland) must be justified on objective and legal grounds. Having good explanations for such decisions allows us to ensure fair and transparent decision making.

Another example of the need for explanations for ML predictions is autonomous driving. There might be situations where the car must decide between different scenarios which all involve putting humans at risk. It is instrumental for the acceptance of autonomous cars to provide convincing explanations or justifications for predictions and subsequent decisions. 

Motivated by such applications, we have recently studied explainable machine learning using tools from information theory. The effect of providing an explanation is modeled as the “reduction in surprise” about a prediction. This approach lets us capture individual levels of surprise by modeling each user’s knowledge or background. The key message is that explanations should be personalized, taking the training or understanding of the user into account.

In order to construct personalized explanations, we need to get some idea of the user’s background. To make this approach as widely applicable as possible, we merely require the user “u” to provide some “summary” for each of a set of training data points (Fig. 1). Mathematically, the summary can be any quantity that depends on the data point only via its features, u = u(x). The user summaries obtained for the training data points are then used to find explanations that are optimal for the individual user.
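To make this concrete, here is a minimal Python sketch of what a user summary u = u(x) could look like. The two summary functions and the feature interpretations in the comments are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# A data point is represented by its feature vector x.
# A user summary u(x) is any quantity that depends on the data point
# only through its features x.

def summary_expert_user(x):
    # Hypothetical summary for a user who reasons about one physically
    # meaningful feature, say x[0] (e.g., the current wind power level).
    return float(x[0])

def summary_lay_user(x):
    # Hypothetical summary for a user without mathematical training:
    # just a coarse "high vs. low" judgement about the feature values.
    return 1 if np.mean(x) > 0 else 0

# Summaries collected for a set of training data points:
X_train = np.random.randn(100, 5)                 # 100 data points, 5 features each
u_train = [summary_lay_user(x) for x in X_train]  # one summary per training point
```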

Personalization of explanations is important since users of ML systems can have vastly different backgrounds. Some users might have university-level training in applied mathematics, while others have no training in linear algebra at all. Approximating the ML prediction by a linear model might yield good explanations for the former, but such explanations might not be helpful for the latter.

An explanation e provides additional information I(y; e|u) to a user u about the prediction y.
A simple probabilistic model for explainable ML.

Like the user summary, the prediction of an ML method is obtained by applying a (possibly stochastic) map to the features of a data point, y = h(x). The predictor map h(·) can be learned from a set of labeled data points (training data). Overall, we use a simple probabilistic model for data points, user summaries, and predictions. This probabilistic model allows us to quantify the effect of presenting an explanation e to a user u of the ML prediction y via the conditional mutual information I(y; e|u).
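To make the last sentence concrete, here is a small numerical sketch of how I(y; e|u) measures the information an explanation e adds about the prediction y on top of the user summary u. The joint probability table below is made up for illustration, and the entropy-based identity is a standard information-theoretic decomposition, not a detail taken from the paper.

```python
import numpy as np

# A made-up joint probability table p(u, e, y) over a binary user summary u,
# a binary explanation e, and a binary prediction y. Axis order: (u, e, y).
p = np.array([[[0.20, 0.05], [0.05, 0.20]],
              [[0.05, 0.20], [0.20, 0.05]]])
assert np.isclose(p.sum(), 1.0)

def H(p_joint):
    # Shannon entropy (in bits) of a joint pmf given as an array.
    q = p_joint[p_joint > 0]
    return float(-(q * np.log2(q)).sum())

# I(y; e | u) = H(y, u) + H(e, u) - H(y, e, u) - H(u)
I_y_e_given_u = (H(p.sum(axis=1))           # p(u, y)
                 + H(p.sum(axis=2))         # p(u, e)
                 - H(p)                     # p(u, e, y)
                 - H(p.sum(axis=(1, 2))))   # p(u)
print("I(y; e | u) =", I_y_e_given_u, "bits")
```

For this made-up table the value comes out to roughly 0.28 bits: the explanation removes part of the uncertainty about the prediction that remains after seeing the user summary.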

In this way, we have developed an information-theoretic approach to explainable ML. This approach is quite flexible since it allows for very different constructions of explanations: an explanation could be a subset of features, a training example that is similar to the data point being predicted, or a counterfactual. Having developed an information-theoretic lens on explainable ML, the next step is to study and implement algorithms that trade off explainability against predictive performance and computation.
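As a rough sketch of how such a trade-off could look in code, the toy example below searches over candidate explanations of the form “reveal this subset of features” and picks the subset, within a size budget, that maximizes an empirical estimate of I(y; e|u). The data, the plug-in entropy estimator, and the size budget are all assumptions made for illustration; this is not the algorithm from the paper.

```python
import itertools
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Toy data set: 3 binary features per data point, a binary user summary u,
# and a binary ML prediction y that is mostly driven by feature 1.
n = 50_000
X = rng.integers(0, 2, size=(n, 3))
u = X[:, 0]                            # the user's summary "sees" feature 0 only
y = X[:, 1] ^ (rng.random(n) < 0.05)   # prediction = feature 1 plus a little noise

def entropy(*cols):
    # Plug-in entropy (in bits) of the joint distribution of the given columns.
    counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def cond_mi(y, e, u):
    # I(y; e | u) = H(y, u) + H(e, u) - H(y, e, u) - H(u)
    return entropy(y, u) + entropy(e, u) - entropy(y, e, u) - entropy(u)

# Candidate explanations: "reveal this subset of features", up to a size budget.
budget = 1
candidates = [s for k in range(1, budget + 1)
              for s in itertools.combinations(range(X.shape[1]), k)]

def encode(subset):
    # Encode the revealed feature values as a single integer per data point.
    return X[:, subset] @ (2 ** np.arange(len(subset)))

best = max(candidates, key=lambda s: cond_mi(y, encode(s), u))
print("most informative feature subset for this user:", best)
```

With the made-up data above, the search should pick feature 1, since it is the only feature that actually drives the prediction and the user summary does not already reveal it.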

We have just started to scratch the surface of the information theory underlying explainable machine learning. More work is needed to obtain a fine-grained understanding of fundamental limits and trade-offs between explainability, data, computation and accuracy.

To summarize, we have discussed an information-theoretic approach to personalized explainable ML in this blog. In this approach, explanations are constructed such that offering them to a user reduces her surprise about a prediction received from some ML method. This construction uses a simple probabilistic model for the relations between data points, user summaries, and ML predictions.

If you are interested in learning more about this topic, follow Alex’s research here:

**Statements and opinions given in this blog are the expressions of the contributor(s). Responsibility for the content of published articles rests upon the contributor(s), not on the IEEE Communications Society or the IEEE Communications Society Young Professionals.
