CAIM Research Symposium 2022

(Photo: © Tara Winstead, Pexels)

When: 24 November 2022, 8.30-13.00h
Where: Kuppelsaal, University of Bern, Hochschulstrasse 4

This year’s research symposium features updates on the CAIM-funded projects, an overview of the most important developments of the past year, and invited keynotes on the chances and challenges of AI for medicine.
Meet our Executive Team, our ethics specialists and the people behind our DAIM initiative!

The Symposium takes place in person and includes a short standing lunch to facilitate networking.

Time Subject Speaker
08.30h Opening Address Raphael Sznitman, AI in Medical Imaging, ARTORG Center & Director CAIM
08.40h Introduction Claudio Bassetti, Director of Neurology & Dean Medical Faculty, University of Bern
08.45h DAIM, an initiative for more diversity in AI research Inti Zlobec, Head of Digital Pathology & DAIM Committee
08.50h Ethics @CAIM, scope of the CAIM Ethics Lab Claus Beisbart, Institute of Philosophy & CAIM Ethics Lab
09.05h Keynote 1:
How Explainability Contributes to Trust in AI
Andrea Ferrario, ETH Zurich, Group of Technology Marketing and Mobiliar Lab for Analytics at ETH
Moderation: Rouven Porz
09.45h Coffee Break (Foyer)


10.00-11.30h CAIM-funded research projects 2022:
Current status and first results
Christoph Gräni / Yasaman Safarkhanlo
Daniel Fuster / Rémy Bruggmann
Richard McKinley / Piotr Radojewski
Tobias Nef / Stefan Klöppel
Petra Stute / David Ginsbourger

Moderation: Stavroula Mougiakakou

11.30h Stand-up Lunch (Foyer)
12.00h Keynote 2:
Deep Learning Medical Image Analysis in Radiology: myths, realities and how to make it work for you
Leo Joskowicz, Director CASMIP Lab: Computer-aided Surgery and Medical Image Processing, The Hebrew University of Jerusalem, Israel
Moderation: Mauricio Reyes
12.40h CAIM Awards / Closing Remarks Raphael Sznitman
13.00h End of Symposium

"How Explainability Contributes to Trust in AI"

Dr. Andrea Ferrario
ETH Zurich and Mobiliar Lab for Analytics at ETH

Abstract

I provide a philosophical explanation of the relation between artificial intelligence (AI) explainability and trust in AI, making a case for expressions such as “explainability fosters trust in AI” that commonly appear in the literature. This explanation relates the justification of the trustworthiness of an AI with the need to monitor it during its use. I discuss the latter by referencing an account of trust, called “trust as anti-monitoring,” that different authors have contributed to developing. I propose that “explainability fosters trust in AI” if and only if it fosters justified and warranted paradigmatic trust in AI, i.e., trust in the presence of the justified belief that the AI is trustworthy, which, in turn, causally contributes to relying on the AI in the absence of monitoring. Focusing on the use case of medical AI systems, I argue that the proposed approach can capture the complexity of the interactions between physicians and medical AI systems in clinical practice, as it can distinguish between cases where humans hold different beliefs about the trustworthiness of the medical AI and exercise varying degrees of monitoring over it. Finally, applying the account to users’ trust in AI, I argue that explainability does not contribute to trust. By contrast, when considering public trust in AI as used by a human, it is possible for explainability to contribute to trust.


Andrea Ferrario holds a Ph.D. in mathematics from ETH Zurich. He worked in industry as a data scientist for five years before returning to ETH. Since then, he has been a Postdoctoral Researcher at the Chair of Technology Marketing and the Scientific Director of the Mobiliar Lab for Analytics at ETH Zurich. His research interests lie at the intersection of philosophy and technology, with a focus on AI and mixed reality. They comprise the ethics and epistemology of AI, the use of natural language processing and machine learning for digital health interventions, and the use of immersive augmented reality to collaboratively address the interpretability of machine learning models.

"Deep Learning Medical Image Analysis in Radiology:
myths, realities and how to make it work for you"

Prof. Dr. Leo Joskowicz
CASMIP Lab, The Hebrew University of Jerusalem, Israel

Abstract

Radiology, one of the cornerstones of modern healthcare, is undergoing rapid and profound changes due to the ever-increasing number of imaging examinations, the shortage of certified radiologists, the dynamics of healthcare economics, and the technological developments of artificial intelligence-based image processing. Deep learning has been adopted as the solution of choice for a variety of clinical applications. However, deep learning presents significant challenges and requires consideration of the ecosystem around it. In this talk, we will discuss the myths and realities of deep learning medical image analysis in radiology. We will focus on three key aspects: 1) how to quantify and incorporate task-specific observer variability and measurement uncertainty when establishing the clinical goal of the analysis; 2) how to accelerate deep learning and make it robust with very few annotated datasets; and 3) what pipeline is required to enhance the performance of deep learning networks. We will illustrate these issues and present our methods with a variety of examples from our recent work on fetal MRI, abdominal CT, and OCT analysis.


Leo Joskowicz has been a Professor at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, Israel, since 1995. He is the founder and director of the Computer-Aided Surgery and Medical Image Processing Laboratory (CASMIP Lab). Prof. Joskowicz is a Fellow of the IEEE, ASME, and MICCAI (Medical Image Computing and Computer Assisted Intervention) Societies. He is a past President of the MICCAI Society and was the Secretary General of the International Society of Computer Aided Orthopaedic Surgery (CAOS) and of the International Society for Computer Assisted Surgery (ISCAS). He is the recipient of the 2010 Maurice E. Müller Award for Excellence in Computer Assisted Surgery from the International Society of Computer Aided Orthopaedic Surgery and of the 2007 Kaye Innovation Award. He has published two books and over 270 technical works, including conference and journal papers, book chapters, and editorials, and holds 14 issued patents. He serves on the editorial boards of several journals, including Medical Image Analysis, Int. J. of Computer Aided Surgery, and Computer Aided Surgery, and has served on numerous related program committees.

Registration

Registration is closed as we have reached the maximum number of participants. Thank you for your understanding.