Title | Explainable Medical Decision: Investigating a Model-Invariant Algorithm for Neural Network Attribution Maps [Master] |
Description | A central finding of preliminary research is that different neural network architectures, trained on the same data distribution, produce divergent attribution maps for local explanations, supporting the claim that attribution maps are model-dependent [2] (a minimal sketch illustrating this model-dependence follows the listing below). At the same time, attribution maps of different origins can share certain common characteristics [1]. Building on this, the proposed research is to develop a novel algorithm that generates attribution maps accepted by all models trained on the same data distribution, regardless of architecture. Such an algorithm would yield explanations free of model-dependency and model-bias, i.e., model-invariant explanations. The work aims to bridge the gap between differing neural network architectures, improving the interpretability, consistency, and usability of attribution-based explanations. Ultimately, advances in this direction could significantly propel the evolution of explainable Artificial Intelligence (AI). Contact: abdul.kadir@dfki.de Relevant literature: [1] Kadir, M. A., Addluri, G. K., & Sonntag, D. (2023). Harmonizing Feature Attributions Across Deep Learning Architectures: Enhancing Interpretability and Consistency. arXiv preprint arXiv:2307.02150. [2] Gupta, A., Saunshi, N., Yu, D., Lyu, K., & Arora, S. (2022). New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound. Advances in Neural Information Processing Systems, 35, 33120-33133. |
Home institution | Department of Computing Science |
Associated institutions | |
Type of work | conceptual / theoretical |
Type of thesis | Master's degree |
Author | Ilira Hiller |
Status | available |
Problem statement | |
Requirement | |
Created | 14/12/23 |
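The model-dependence claim in the description can be made concrete with a small experiment. The sketch below is illustrative only, not the algorithm the thesis would develop: it assumes PyTorch and torchvision (≥ 0.13, with downloadable ImageNet weights), uses two arbitrary architectures (ResNet-18 and VGG-16) as stand-ins for "different models trained on the same data distribution", and compares vanilla gradient saliency maps with cosine similarity. The saliency method and the similarity metric are hypothetical choices for demonstration.

```python
import torch
import torch.nn.functional as F
from torchvision import models


def saliency_map(model, x, target_class=None):
    """Vanilla gradient saliency: |d score / d input|, max over channels."""
    model.eval()
    x = x.clone().requires_grad_(True)
    scores = model(x)
    if target_class is None:
        target_class = scores.argmax(dim=1)
    # Backpropagate the score of the (predicted) class to the input.
    score = scores.gather(1, target_class.view(-1, 1)).sum()
    score.backward()
    # Aggregate channel gradients into a single-channel attribution map.
    return x.grad.abs().max(dim=1).values  # shape: (N, H, W)


def map_similarity(a, b):
    """Cosine similarity between two flattened attribution maps."""
    return F.cosine_similarity(a.flatten(1), b.flatten(1), dim=1)


if __name__ == "__main__":
    torch.manual_seed(0)
    # A random "image" stands in for real data here; in practice one would
    # use samples from the shared training distribution.
    x = torch.randn(1, 3, 224, 224)

    # Pretrained ImageNet weights serve as a proxy for two architectures
    # trained on the same data distribution.
    model_a = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model_b = models.vgg16(weights=models.VGG16_Weights.DEFAULT)

    sal_a = saliency_map(model_a, x)
    sal_b = saliency_map(model_b, x)
    print("cosine similarity of attribution maps:",
          map_similarity(sal_a, sal_b).item())
```

Low similarity between the two maps for the same input is exactly the model-dependence described above; a model-invariant algorithm of the kind the thesis proposes would, by construction, drive such a cross-architecture similarity score toward 1.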