NOBIM Conference 2025 is approaching very soon! It will be held at the NTNU Gjøvik campus on November 6th and 7th. We have two very exciting keynote speakers from the fields of Medical Imaging and Human-Computer Interaction. The complete program can be found here.
And don't forget to register today! 🙂
Photo credit: Espen Taftø Vestad/NTNU
By Kristoffer Wickstrøm
Explainable artificial intelligence (XAI) is the field of research that seeks to answer the question of "why?" Why does the algorithm think an image contains a certain object? Why did it make a mistake on this particular example? Why does it disagree with an experienced physician on an illness that is particularly challenging to diagnose? Such questions arise in almost all scenarios where automatic support systems are part of the decision process, and particularly in safety-critical domains such as healthcare. Answering the question of "why?" is crucial for creating trustworthy, reliable, and informative automatic decision systems. The vast majority of XAI research has focused on explaining scores and predictions, but no methods have been designed for explaining representations of data. With the advance of general-purpose foundation models that produce high-quality representations of data without a particular downstream task in mind, understanding and explaining representations of data is becoming increasingly important.
In my research, I have developed the first XAI framework for representation learning, entitled RELAX [1]. RELAX allows users to visualise which parts of an input are important for the representation of that particular input. For example, we show how RELAX can be used to explain and visualise which features are encoded by a classic histogram-of-oriented-gradients approach. We have also demonstrated how understanding representations can offer key insights in the context of self-supervised learning for CT liver images [2]. Most recently, we have built on the RELAX framework to design uncertainty-aware explanations of representations [3].
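The core masking-and-compare idea behind explaining a representation can be sketched in a few lines. The snippet below is a simplified illustration, not the authors' implementation: it assumes a generic `encoder` that maps an image to a vector, occludes the input with random binary masks, and credits each kept pixel with the cosine similarity between the masked and unmasked representations, so pixels whose presence preserves the representation score highly.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two representation vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def masking_importance(image, encoder, n_masks=200, keep_prob=0.5, seed=0):
    """Sketch of occlusion-based importance for a representation.

    Each random mask keeps a subset of pixels; kept pixels are credited
    with the similarity between the masked and full representations, then
    scores are normalised by how often each pixel was kept.
    """
    rng = np.random.default_rng(seed)
    h_full = encoder(image)                      # representation of full input
    importance = np.zeros(image.shape, dtype=float)
    coverage = np.zeros(image.shape, dtype=float)
    for _ in range(n_masks):
        mask = rng.random(image.shape) < keep_prob   # random binary occlusion
        s = cosine_sim(h_full, encoder(image * mask))
        importance += s * mask                   # credit the pixels that survived
        coverage += mask
    return importance / np.maximum(coverage, 1)  # average score per pixel
```

With a toy "encoder" that simply flattens the image, pixels belonging to a bright patch receive higher importance than the background, since masking them out changes the representation the most. In practice the encoder would be a learned network and the masking and weighting schemes are more refined; see [1] for the actual RELAX formulation.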
References
[1] K. Wickstrøm et al. “Relax: Representation learning explainability”, IJCV, 2023.
[2] K. Wickstrøm et al. “A clinically motivated self-supervised approach for content-based image retrieval of CT liver images”, CMIG, 2023.
[3] K. Wickstrøm et al. “REPEAT: Improving Uncertainty Estimation in Representation Learning Explainability”, AAAI, 2025.
Get in touch with us at styret@nobim.no should you want your research to be featured.