Bern Interpretable AI Symposium

BIAS in Medical Image Analysis, at the University of Bern, on 24 March 2023

(Photo © Mart Production, Pexels)

When: 24 March 2023, 9.30-18.00h
Where: Kuppelraum, University of Bern, Hochschulstrasse 4

Overview

The Bern Interpretable AI Symposium (for Medical Image Analysis) is a one-day meeting on 24 March 2023 that aims to bring together researchers in the interpretable AI for medical imaging community. Our hope and objective is to “open the black box”, share insights into challenges and breakthroughs in the field, and foster closer interaction among participants.

We plan to operate in a hybrid mode, both online and in person, with an emphasis on accessibility for early-career researchers and newcomers interested in getting involved in the field. To facilitate this, we have lined up a series of invited talks from all three major stakeholders: industry, academia, and the clinic, giving an overview of recent advances, challenges, and efforts to share insights.

Call for Abstracts

We invite submissions of extended abstracts for oral and poster presentation at the symposium. Submitting an abstract is a great way to engage with the interpretable AI in medical imaging community and to showcase your research. Submitted work may be preliminary or in progress, and we also welcome perspective and position papers to foster discussion of recent trends in this niche space. More details on deadlines are given below.

Motivation

Although the recent boom in AI models has produced promising results in mission-critical fields such as medical imaging, their interpretability remains a major open question. Researchers in academia, industry, and the clinic may understand medical images differently, and we believe a common language and terminology is needed to obtain precise and safe inferences.
Developing novel tools for computer-aided diagnosis, therapy, and intervention depends critically on our ability to explain their behavior and to assign accountability; BIAS attempts to bring these questions to the fore, in an unbiased manner. The symposium aims to raise awareness among all stakeholders of the unmet needs in interpretability for the successful deployment of AI in medical imaging.

Contact the Organizers

Please reach out to Amith Kamath, Yannick Suter or Mauricio Reyes.

Preliminary program
Time Subject Speaker
09.00h Registration
09.30h Introduction and welcome Amith Kamath, Uni Bern
09.45h Machine learning interpretability - Road to mastery: Tutorial Dr. Mara Graziani, HES-SO Valais-Wallis & IBM Research, Zurich
10.15h Coffee / Refreshments Foyer
10.30h Keynote: The need for interpretability in clinical decision support Prof. Dr. Henning Müller, HES-SO Valais-Wallis
11.30h Explainable AI (XAI) for medical applications with MATLAB Dr. Christine Bolliger and Dr. Res Joehr, MathWorks
12.00-13.00h Lunch Break Mensa Gesellschaftsstrasse
13.00h Poster presentations - I Dr. Yannick Suter, Uni Bern
14.00h Coffee / Refreshments
14.15h Keynote: The day 2 problem for medical imaging AI Dr. Matt Lungren, Nuance Communications, a Microsoft Company

14.45h Poster presentations - II Dr. Yannick Suter, Uni Bern
15.45h Keynote: Practicing Safe Rx: The importance of intelligible machine learning in healthcare Dr. Rich Caruana, Microsoft Research
16.45h Panel Discussion Dr. Mara Graziani, Dr. Rich Caruana, Dr. Mohamed Anas, and Dr. Lisa Koch
Moderation: Prof. Dr. Mauricio Reyes
17.15-18.00h Closing & Apero Amith Kamath, Uni Bern

Machine Learning Interpretability: Road to Mastery

Interpreting complex machine learning models can be difficult. There is a plethora of existing methods, but their meaningfulness and reliability remain hard to evaluate. Moreover, depending on the purpose (debugging, ...), one technique from the literature is often more appropriate than the others. How do we choose the best approach in the landscape of existing techniques? This talk is organized as a virtual "walk" through different techniques, from glass-box, transparent models to black boxes. On the road to mastery, we will cover both standard approaches and the latest research outcomes.

Dr. Mara Graziani

Mara Graziani is a postdoctoral researcher at IBM Research Zurich and at two universities of applied sciences in Switzerland: ZHAW and HES-SO Valais. She obtained a Ph.D. in Computer Science from the University of Geneva in December 2021. Her research uses interpretable deep learning methods to support and facilitate knowledge discovery in biomedical research. During her Ph.D., she was a visiting student at the Martinos Center, part of Harvard Medical School in Boston, MA, focusing on the interaction between clinicians and deep learning systems.
Coming from a background in IT engineering, she was awarded the Engineering Department Award for completing the M.Phil. in Machine Learning, Speech and Language Technology at the University of Cambridge (UK) in 2017.

The Need for Interpretability in Clinical Decision Support

Classical machine learning methods used in clinical decision support, such as handcrafted features or decision trees, could be interpreted by design. When deep neural networks delivered much better results on many tasks, but as black-box models, it became clear that a machine learning decision without any explanation can hardly be integrated into the way clinical work such as diagnosis or treatment planning is done. Interpretability/explainability has therefore become a major challenge for using any such tool in clinical practice. The presentation will start with the basic challenges of systematic medical data analysis and move towards integrating explainable AI into modern solutions for digital medicine.

Prof. Dr. Henning Müller

Henning Müller is titular professor in medical informatics at the University Hospitals of Geneva and professor in business informatics at the HES-SO Valais, where he is responsible for the eHealth unit. He studied medical informatics at the University of Heidelberg, Germany, then worked at Daimler-Benz research in Portland, OR, USA. He carried out a research stay at Monash University, Melbourne, Australia in 2001, and in 2015-2016 he was a visiting professor at the Martinos Center in Boston, MA, USA, part of Harvard Medical School and the Massachusetts General Hospital (MGH), working on collaborative projects in medical imaging and system evaluation, among others in the context of the Quantitative Imaging Network of the National Cancer Institute. He has authored over 400 scientific papers, is on the editorial boards of several journals, and reviews for many journals and funding agencies around the world.

Explainable AI (XAI) for Medical Applications with MATLAB

In recent years, artificial intelligence (AI) has shown great promise in medicine and medical device applications. However, strict regulatory requirements on interpretability and explainability can make it prohibitively difficult to use AI-based algorithms for medical applications. To address this, interpretable machine learning and deep learning techniques have been developed to assess whether a model behaves as expected or needs further development and training.
In this talk, we will highlight methods that help explain the predictions of deep neural networks applied to medical images such as MRI and X-ray. You will learn about the interpretability methods readily available in MATLAB, such as occlusion sensitivity and gradient-weighted class activation mapping (Grad-CAM). We will illustrate how these methods can be applied interactively using MATLAB app-based [1] and command-line workflows. Further, these methods will be put in the context of the complete AI workflow through an example in which we develop an image segmentation network for cardiac MRI images and inspect it with Grad-CAM [2].
[1] Explore Deep Network Explainability Using an App, GitHub.com.
[2] Cardiac Left Ventricle Segmentation from Cine-MRI Images using U-Net, MATLAB Documentation.
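For readers new to these techniques, the minimal sketch below illustrates the idea behind occlusion sensitivity in framework-agnostic Python. It is only an illustration of the underlying principle under stated assumptions, not the MathWorks implementation (the talk itself uses MATLAB toolbox functions); `predict_fn` is a hypothetical callable standing in for any model that returns a score for the class of interest.

```python
# Minimal sketch of occlusion sensitivity: slide a constant patch over the
# image and record how much the class score drops when each region is hidden.
# `predict_fn` is a hypothetical callable mapping an HxWxC image to a scalar
# score for the class of interest (not part of any MATLAB/MathWorks API).
import numpy as np

def occlusion_sensitivity(image, predict_fn, patch=16, stride=8, fill=0.0):
    """Return a heatmap of score drops; larger values mean the occluded
    region was more important for the prediction."""
    h, w = image.shape[:2]
    baseline = predict_fn(image)
    heatmap = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, top in enumerate(range(0, h - patch + 1, stride)):
        for j, left in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch] = fill
            heatmap[i, j] = baseline - predict_fn(occluded)
    return heatmap
```

Grad-CAM, by contrast, uses the gradient of the class score with respect to a convolutional feature map, so it requires access to the network internals rather than only a prediction function.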

Dr. Christine Bolliger

Christine Bolliger is a senior application engineer at MathWorks in Bern (Switzerland), supporting customers across different industries in the areas of software engineering, data science and cloud computing. She holds master's degrees in Physics and in Computational Science and Engineering and has a PhD degree in Biomedical Sciences. Before joining MathWorks, she worked as a software engineer and leader of a data science team.

The Day 2 Problem for Medical Imaging AI

Despite growing interest in artificial intelligence (AI) applications in medical imaging, there are still barriers to widespread adoption. One key issue is the lack of tools to monitor model performance over time, which matters because performance can degrade in a variety of scenarios. This talk will propose a system to address this issue that relies on statistics, deep learning, and multi-modal integration, and will describe how this approach allows real-time monitoring of AI models in medical imaging without the need for ground-truth data.

Dr. Matt Lungren

Matt Lungren is Chief Medical Information Officer at Nuance Communications, a Microsoft Company. As a physician and clinical machine learning researcher, he maintains a part-time interventional radiology practice at UCSF while also serving as adjunct faculty for other leading academic medical centers including Stanford and Duke.
Prior to joining Microsoft, Dr. Lungren was an interventional radiologist and research faculty member at Stanford University Medical School, where he led the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI). More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare, focusing on business development for clinical machine learning technologies in the public cloud.

Practicing Safe Rx: The Importance of Intelligible Machine Learning in Healthcare

In machine learning, tradeoffs must often be made between accuracy and intelligibility: the most accurate models usually are not very intelligible, and the most intelligible models usually are less accurate. This can limit the accuracy of models that can safely be deployed in mission-critical applications such as healthcare, where being able to understand, validate, edit, and trust models is important. EBMs (Explainable Boosting Machines) are a recent learning method based on generalized additive models (GAMs) that are as accurate as full-complexity models, more intelligible than linear models, and can be made differentially private with little loss in accuracy. EBMs make it easy to understand what a model has learned and to edit the model when it learns inappropriate things. In the talk I’ll present several case studies where EBMs discover surprising patterns in medical data that would have made deploying black-box models risky.
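As a concrete illustration of the kind of model the talk describes, here is a minimal sketch of training and inspecting an EBM with the open-source InterpretML package. It assumes `interpret` and `scikit-learn` are installed (`pip install interpret scikit-learn`); the dataset is synthetic and purely illustrative, not medical data.

```python
# Minimal sketch: fit an Explainable Boosting Machine (EBM) and inspect
# what it has learned, using the open-source InterpretML package.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Synthetic binary classification problem standing in for tabular clinical data.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# EBMs are GAMs fit with boosting: one shape function per feature (plus a
# limited number of pairwise interactions), so each term can be plotted
# individually and edited or removed if it encodes something inappropriate.
ebm = ExplainableBoostingClassifier(interactions=5, random_state=0)
ebm.fit(X_train, y_train)
print("held-out accuracy:", ebm.score(X_test, y_test))

# Global explanation: per-feature shape functions and term importances.
show(ebm.explain_global())
# Local explanation: per-feature contributions for individual predictions.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

The glass-box structure is what makes the editing described in the abstract possible: because the model is a sum of per-term contributions, a clinically implausible term can be examined and corrected without retraining a black box.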

Dr. Rich Caruana

Rich Caruana is a senior principal researcher at Microsoft Research. Before joining Microsoft, Rich was on the faculty in the Computer Science Department at Cornell University, at UCLA’s Medical School, and at CMU’s Center for Learning and Discovery. Rich’s Ph.D. is from Carnegie Mellon University, where he worked with Tom Mitchell and Herb Simon. His thesis on Multi-Task Learning helped create interest in a new subfield of machine learning called Transfer Learning. Rich received an NSF CAREER Award in 2004 (for Meta Clustering), best paper awards in 2005 (with Alex Niculescu-Mizil), 2007 (with Daria Sorokina), and 2014 (with Todd Kulesza, Saleema Amershi, Danyel Fisher, and Denis Charles), co-chaired KDD in 2007 (with Xindong Wu), and serves as area chair for NIPS, ICML, and KDD. His current research focus is on learning for medical decision making, transparent modeling, deep learning, and computational ecology.

Panel discussion

Dr. Mara Graziani, HES-SO Valais-Wallis and Research Scientist at IBM Research, Switzerland 

Dr. Rich Caruana, Senior Principal Researcher, Microsoft Research 

Dr. Mo Anas, Engineering Group Manager, The MathWorks 

Dr. Lisa Koch, Group Leader for "Machine Learning for Medical Diagnostics", Werner Reichardt Centre for Integrative Neuroscience (CIN), Institute for Ophthalmic Research 

Moderation: Prof. Dr. Mauricio Reyes, ARTORG Center for Biomedical Engineering Research, University of Bern

Registration is closed.

Call for Extended Abstracts

Acceptance and selection of orals and posters will be based on how well a submission matches the theme of the symposium, and decisions will be made by the organizers. We will run a light peer review with an objective score; however, written review feedback cannot be shared. Depending on the number and quality of abstracts we receive, we plan to invite selected submissions to prepare an extended version of their work for a special issue of a journal (still being arranged). Please indicate your interest in this special issue on the CMT submission form.

Submission

Abstract submissions are due March 10th, AoE (Click here to submit)

Authors will be notified.

Format

Abstracts are strictly one page long (a second page may only contain the references), and we recommend using this template. Abstracts must be submitted in PDF format, and the review will be single-blind. We expect a brief description of the work, including context, methodology, and (possibly preliminary) results.

Poster Format

Authors of accepted abstracts are kindly requested to email a 4-minute video describing their work to the organizers (Amith, Yannick, and Prof. Dr. Reyes) by 21 March 2023. Please reach out to the organizers with any questions about this.

The symposium is hosted at the University of Bern, Switzerland. The entire event will be at the Cupola Room (Kuppelraum) in the main building of the University of Bern. The room is on the 5th floor. See below for a map link to get to this building.

Kuppelraum, Universität Bern, Hochschulstrasse 4, 5th floor (© University of Bern)