Explainability for Graphs with PyTorch Geometric and Captum
In this Colab notebook, we show how to apply explainability methods to Graph Neural Networks.
The MUTAG dataset contains molecules represented as graphs: each node is an atom, and each edge is a chemical bond between atoms. We first train a simple graph classifier, then explain its predictions with different interpretability methods.
We will use PyTorch Geometric to manipulate the graph data structures and to build the Graph Neural Network, and Captum together with GNNExplainer for graph interpretability.