Advances in computer architecture have enabled machine learning models to achieve state-of-the-art performance in fields such as text classification, image classification, and disease detection. Deploying these models remains challenging in domains such as finance and medicine, however, where significant risk is attached to the decision-making process. To bolster the adoption of such systems, the AI community has recently focused on developing explanation models, which help users of AI-assisted systems trust the algorithms by explaining the model output in terms of human-interpretable concepts. Because explanation models typically rely on an expensive search procedure, producing explanations at scale is challenging.

This talk shows how Ray can be used to distribute explanations across a Kubernetes cluster, reducing the time needed to explain multiple instances. Ray thereby makes it practical to explain a model's behaviour over a large dataset, allowing data scientists to draw insights into their models. Crucially, explanations of model behaviour across an entire dataset can reveal biases in the training data, so by leveraging Ray, data scientists can help ensure the systems they develop are fair and transparent.
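As a rough illustration of the pattern the talk describes, the minimal sketch below parallelises per-instance explanations as Ray tasks. Here `explain_instance` is a hypothetical stand-in for a real explainer's search procedure, and on a Kubernetes cluster `ray.init` would connect to an existing cluster (e.g. `ray.init(address="auto")`) rather than start a local one.

```python
import time

import numpy as np
import ray

# Start Ray locally; inside a Kubernetes cluster you would instead
# connect to the running cluster, e.g. ray.init(address="auto").
ray.init()


@ray.remote
def explain_instance(x):
    """Hypothetical stand-in for an expensive explanation search.

    A real explainer would run its search procedure against the model
    here; we simulate the cost with a short sleep.
    """
    time.sleep(0.5)  # simulate the expensive search
    return {"instance": x.tolist(), "explanation": "..."}


# Explain a batch of instances in parallel across the available workers.
X = np.random.rand(100, 4)
futures = [explain_instance.remote(x) for x in X]
explanations = ray.get(futures)
print(f"Computed {len(explanations)} explanations")
```

Because each explanation is independent, the work scales out with the number of workers in the cluster, which is what makes dataset-wide explanation feasible.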
Watch the full talk from the conference below: