IJCAI 2021 Tutorial on Theoretically Unifying Conceptual Explanation and Generalization of DNNs
2021/08/21 0:00 am GMT - 2021/08/21 3:00 am GMT
The interpretability of deep neural networks (DNNs) has been an emerging research direction in recent years. However, current studies usually explain DNNs from diverse perspectives, without a unified theory to bridge different explanations of a DNN. In particular, explaining concepts encoded in DNNs and explaining the generalization power of DNNs are two mainstreams in XAI, but these lines of work have been developed independently, without any theoretical connection. Consequently, the explanation of concepts encoded in a DNN usually cannot effectively boost the DNN's performance, while the mathematical proof of a DNN's generalization power does not explain the emergence of mid-level semantics in the DNN. Therefore, considering both the future development and the trustworthiness of XAI, it is crucial to propose a theory that explains a DNN's distinctive signal-processing behaviors of encoding different concepts and, at the same time, bridges the encoded concepts and the DNN's generalization power.
To this end, this tutorial mainly introduces the speaker's six recent studies (including two papers in ICLR 2021, two papers in AAAI 2021, and two arXiv papers submitted to ICML 2021). In these studies, the speaker has proposed the multi-order interaction and the multivariate interaction in game theory, and has proven that such game-theoretic interactions can explain both the visual concepts encoded in DNNs and the DNNs' generalization power. More specifically, this tutorial will introduce the following issues.
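As a rough sketch of the multi-order interaction mentioned above (the precise definitions appear in the speaker's cited papers; here $v$ denotes the DNN output viewed as a set function over the set $N$ of input variables, and this summary simplifies the full formulation):

```latex
% Marginal interaction effect between input variables i and j under a context S:
\[
\Delta v(i,j,S) \;=\; v(S \cup \{i,j\}) - v(S \cup \{i\}) - v(S \cup \{j\}) + v(S)
\]
% The m-th order interaction averages this effect over all contexts of size m,
% so low orders reflect simple, local collaborations between variables and
% high orders reflect complex, global ones:
\[
I^{(m)}(i,j) \;=\; \mathbb{E}_{\,S \subseteq N \setminus \{i,j\},\ |S| = m}\!\left[\, \Delta v(i,j,S) \,\right]
\]
```

Intuitively, sweeping the order $m$ from small to large decomposes the DNN's inference into interaction effects of different complexities, which is what allows the same quantity to describe both encoded concepts and generalization behavior.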
Explaining all the above issues with the unified theory of game-theoretic interactions enables people to explore the essence of many existing deep-learning techniques, which is of significant value for the future development of XAI. This tutorial therefore aims to bring together researchers and industrial practitioners who are concerned about the interpretability, safety, and reliability of artificial intelligence. Critical discussion of the connection between the encoding of concepts and a DNN's performance points to prospective new research directions. Thus, this tutorial is expected to influence critical industrial applications such as medical diagnosis, finance, and autonomous driving.
Speaker | Topic | Slides | Video
Quanshi Zhang | Theoretically Unifying Conceptual Explanation and Generalization of DNNs | slides | video
I also invited Dr. Huiqi Deng and Dr. Wen Shen to report their new findings in XAI.
Speaker | Topic | Slides
Huiqi Deng | How to unify attribution explanations by interactions? | slides
Wen Shen | What is the relationship between interactions and visual concepts? Learning compositional and interpretable features | slides
Please contact Quanshi Zhang if you have questions.