We are delighted to announce the release of Alibi Explain v0.7.0, featuring a new explanation technique and a method for summarizing datasets and creating interpretable classifiers.
Similarity explanations: a new class of explanations
Explanation algorithms can be categorized by the type of insight they provide to the user. One often overlooked class is similarity explanations, which justify a model's prediction on a data point by finding similar data points in the training set.
We introduce a new method, GradientSimilarity (Charpiat et al., 2019; Hanawa et al., 2020), which explains the predictions of gradient-based (PyTorch and TensorFlow) models by scanning the training data for the points most similar, from the model's point of view, to the instance being explained. For example, given a model trained on ImageNet and a photo of a Golden Retriever, the explanation of the (correct) prediction is a list of the most similar training images, taking the model's predictions into account; in this case, the most similar instances are a set of Golden Retrievers which the model also predicts to be Golden Retrievers. This kind of explanation is very intuitive and can be thought of as the model justifying its prediction by referring to the most similar data points it was trained on.
The method works with any PyTorch or TensorFlow model for which the loss function used to train it is known, and it supports any data modality (images, text, tabular data, etc.). For more examples, please refer to our documentation.
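As a quick illustration, here is a minimal sketch of the API using the TensorFlow backend; the toy data and model below are placeholders rather than anything from the release:

```python
import numpy as np
import tensorflow as tf

from alibi.explainers import GradientSimilarity

# Toy stand-ins for a real dataset (placeholders for illustration only).
X_train = np.random.rand(128, 4).astype(np.float32)
y_train = np.random.randint(0, 3, size=128)

# Any gradient-based model works; here, a small Keras classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(X_train, y_train, epochs=5, verbose=0)

# The explainer needs the same loss the model was trained with.
explainer = GradientSimilarity(
    model,
    loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(),
    sim_fn='grad_cos',   # cosine similarity between loss gradients
    backend='tensorflow',
)
explainer.fit(X_train, y_train)

# Training instances come back ordered from most to least similar
# to the instance being explained.
explanation = explainer.explain(X_train[:1])
top5 = explanation.data['ordered_indices'][0][:5]
```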
Dataset summarization with prototypes
Gaining insight into a dataset can be hard, especially if it is large or has many features. Dimensionality-reduction methods such as t-SNE or UMAP can be used to visualize high-dimensional datasets. A different option is to instead reduce the size of the dataset by summarizing (distilling) it into a manageable set of “prototypes” that are representative of the entire dataset.
We introduce a new subpackage, alibi.prototypes, featuring methods for summarizing datasets by algorithmically choosing a representative subset of the data called “prototypes”. Specifically, we introduce the method ProtoSelect (Bien and Tibshirani, 2011), originally designed to facilitate interpretable classification.
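Here is a minimal sketch of the API; the toy data, the eps radius and the number of prototypes are placeholder assumptions, and in practice eps would typically be chosen by cross-validation:

```python
import numpy as np

from alibi.prototypes import ProtoSelect
from alibi.utils.kernel import EuclideanDistance

# Toy stand-ins for a labelled dataset.
X = np.random.rand(500, 8)
y = np.random.randint(0, 3, size=500)

# eps is the radius within which points count as "covered" by a prototype;
# the value here is an arbitrary placeholder.
summariser = ProtoSelect(kernel_distance=EuclideanDistance(), eps=0.5)
summariser = summariser.fit(X=X, y=y)

# Distil the dataset into at most 20 representative prototypes.
summary = summariser.summarise(num_prototypes=20)
prototypes = summary.data['prototypes']
prototype_labels = summary.data['prototype_labels']
```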
There are many use cases for data summarization with prototypes. For example, applying it to CIFAR-10 (or any dataset with class labels), we can gauge the diversity of each class by the number of prototypes returned: in our example the counts suggest much higher diversity among images labeled “car” (perhaps owing to the different colours) than among images labeled “airplane”.
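As a rough sketch, reusing the summary object from the example above, the per-class prototype counts can be read off directly:

```python
import numpy as np

# Classes needing more prototypes to be covered are, loosely, more diverse.
labels, counts = np.unique(summary.data['prototype_labels'], return_counts=True)
for label, count in zip(labels, counts):
    print(f'class {label}: {count} prototypes')
```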
Not only do prototypes allow us to gain a better understanding of large datasets, they also provide a pathway to interpretable classifiers. For example, given a set of prototypes, we can construct a 1-nearest-neighbour classifier by simply predicting the class of the prototype closest to the test instance. In our experiments, ProtoSelect was the best-performing prototype-selection method when evaluated by the accuracy of the resulting 1-nearest-neighbour classifier.
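As a sketch of the idea, reusing prototypes and prototype_labels from above and letting scikit-learn stand in for the nearest-neighbour search:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# An interpretable classifier: every prediction is justified by the single
# prototype that produced it.
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(prototypes.reshape(len(prototypes), -1), prototype_labels)

X_test = np.random.rand(10, 8)  # placeholder test instances
preds = knn.predict(X_test.reshape(len(X_test), -1))
```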
ProtoSelect works on any data modality. For more information please refer to our documentation.
Quality-of-life improvements
The v0.7.0 release also features several quality-of-life improvements for data scientists and developers. Most notably, we have introduced optional dependency management, which has allowed us to “slim down” the core package. For example, gradient-based frameworks such as TensorFlow and PyTorch are now optional, streamlining installation for users of methods that do not depend on these frameworks. We have also extended Alibi Explain support to Python 3.10.
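For example, assuming the standard pip extras syntax with tensorflow and torch as the extras names:

```bash
# Core package only; TensorFlow and PyTorch are no longer pulled in.
pip install alibi

# Opt in to a gradient-based backend when you need one.
pip install alibi[tensorflow]
pip install alibi[torch]
```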
References
Bien, Jacob, and Robert Tibshirani. “Prototype selection for interpretable classification.” The Annals of Applied Statistics 5.4 (2011): 2403-2424.
Charpiat, Guillaume, et al. “Input similarity from the neural network perspective.” Advances in Neural Information Processing Systems 32 (2019).
Hanawa, Kazuaki, et al. “Evaluation of similarity-based explanations.” arXiv preprint arXiv:2006.04528 (2020).