Machine learning models, in particular deep neural networks (DNNs), are characterized by very high predictive power, but in many cases are not easily interpretable by a human. Interpreting a nonlinear classifier is important to gain trust in its predictions and to identify potential biases or artifacts in the data. The following demos show how decisions made by AI systems can be explained by LRP.
A simple LRP demo based on a neural network trained on the MNIST data set to predict hand-written digits. You can also try it on your own handwriting.
A more complex LRP demo based on a neural network implemented in Caffe. The network predicts the contents of images.
An LRP demo that explains the classification of natural language documents. The neural network predicts the document's semantic category.
A demo where you can ask an AI questions about an image and instantly get an answer. The AI not only answers your question but also highlights the parts of the image that were relevant to its answer.
"Layer-wise Relevance Propagation" (LRP) technique by Bach et al. ("On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation", 2015)