Model-level graph neural network (GNN) explainers identify general graph patterns that contribute significantly to a GNN's prediction of a target class. However, existing methods often impose insufficient constraints during explanation generation, producing unreliable and atypical graph patterns that limit explanation quality. To address this issue, we propose MOSE, a simple yet effective explainer that learns MOdel-level explanations via a Subgraph order Embedding space. MOSE employs a graph encoder to learn an embedding space in which subgraph relationships among graphs are preserved as an order. A score function with a greedy sampling strategy then generates graph pattern candidates efficiently under the constraint of the subgraph order embedding, ensuring that the candidates are reliable and typical of real data. Final explanations are selected from the candidates according to the class probabilities predicted by the GNN being explained. Additionally, by constructing induced graphs, we extend MOSE to the node classification task, a setting rarely studied in model-level explanation, thereby broadening MOSE's applicability. Extensive experiments on two synthetic datasets and six real-world datasets demonstrate the effectiveness of MOSE across several metrics, including predictive accuracy, model utility, and model efficiency.
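To make the subgraph order constraint concrete, the sketch below shows the standard order-embedding formulation on which such spaces are commonly built: if graph G_q is a subgraph of G_t, their embeddings should satisfy z_q ≤ z_t coordinatewise, and violations of that order are penalized with a max-margin loss. This is a minimal PyTorch illustration under that assumption; the function names, margin value, and negative-pair handling are ours for illustration and are not necessarily MOSE's exact implementation.

```python
import torch

def order_violation(z_sub: torch.Tensor, z_super: torch.Tensor) -> torch.Tensor:
    """Squared violation of the elementwise order z_sub <= z_super.

    Zero iff every coordinate of z_sub is no greater than the matching
    coordinate of z_super, i.e. the pair is consistent with a subgraph
    relation in the order embedding space.
    """
    return torch.clamp(z_sub - z_super, min=0).pow(2).sum(dim=-1)

def order_embedding_loss(z_sub: torch.Tensor,
                         z_super: torch.Tensor,
                         z_neg: torch.Tensor,
                         margin: float = 1.0) -> torch.Tensor:
    # Positive pairs (true subgraph relations) should incur zero violation;
    # negative pairs should violate the order by at least `margin`.
    pos = order_violation(z_sub, z_super)
    neg = torch.clamp(margin - order_violation(z_sub, z_neg), min=0)
    return (pos + neg).mean()

# Toy check: an embedding dominated coordinatewise by its "supergraph"
# embedding incurs zero violation.
z_q = torch.tensor([[0.1, 0.2]])
z_t = torch.tensor([[0.5, 0.9]])
print(order_violation(z_q, z_t))  # tensor([0.])
```

A greedy candidate generator can then grow a pattern one node or edge at a time, keeping only extensions whose embeddings remain order-consistent with real graphs of the target class, which is what keeps the generated patterns typical of the data.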
Keywords: Graph neural networks; Greedy search strategy; Model explainability; Model-level explanation; Subgraph order embedding.