Center for Artificial Intelligence in Medicine (CAIM)

"Who should be accountable?"

July 2023

Christoph Ammon's PhD thesis at the Institute for Criminal Law and Criminology, University of Bern, deals with the increasingly intertwined relationship between humans and machines in the wake of recent developments in AI technology. He discusses how responsibility for a decision and the action that follows from it can be adequately attributed, especially in medical applications, and whether it would make sense to define an AI as a functional legal entity.

Christoph Ammon thinks that the rapid development of AI requires a rethinking of many aspects of the legal responsibility of technology and humans. (© CAIM, University of Bern)

Christoph, what is your research about?
Fundamentally, it is about the extent to which current technology is shifting the interaction between humans and machines from a clear subject-object relationship (we have machines that help us perform tasks faster, more efficiently, or qualitatively better) to a constellation in which human decision-making capacity is transferred or delegated to machines. Many paradigms that were previously taken for granted are currently being called into question. What happens when a machine no longer replaces or supplements the human hand (as in the case of a drill), but rather the human mind, and thus also human decision-making ability?

With big data, as in digitalized medicine, AI can be used to make the decision about a course of action. For example, it can point to a specific diagnosis or to a specific way of performing a medical intervention. Moreover, intelligent surgical technology is able to act ever more autonomously. The question is therefore increasingly: what is still the responsibility of the human, and what is that of the machine?

At the moment, the AI debate is picking up a lot of steam with Large Language Models. With my research, I want to help sharpen awareness of what is happening here technologically, and of the fact that legal regulation is indispensable to steer this development in a socially acceptable direction.

For tricky legal and philosophical questions related to his research, Christoph likes to consult the jurisprudential library at UniS.

Why is this such a difficult topic, especially in medicine?
The licensing of medical technology is already broadly regulated: when a solution comes onto the market, it must be at least as good as a specialist in terms of sensitivity and specificity. A physician using this technology is somewhat constrained in his or her decision-making, knowing that the tool is, statistically, more accurate than he or she is. This may inhibit the physician from deviating from the tool's recommendation, which in effect constitutes a shift of agency from the human being to the medical product. Legally, however, the physician bears the responsibility.
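
As a brief aside on these two metrics (the standard definitions, not figures specific to any particular device discussed in this interview): sensitivity is the share of actually positive cases a tool correctly detects, and specificity is the share of actually negative cases it correctly rules out, where TP, FN, TN, and FP denote true positives, false negatives, true negatives, and false positives.

    sensitivity = TP / (TP + FN)
    specificity = TN / (TN + FP)

A claimed advantage over a specialist therefore always refers to a comparison of these rates on the same reference data.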

If something goes wrong, product liability law could come into play (a company can be held liable for a product defect). But if neural networks make decisions whose internal workings we do not understand (the black box problem) and which continue to learn and adapt their behavior even after training, then we have a problem establishing recourse against the manufacturer. For what caused the error? Was an incorrect data set used for training, was the AI incorrectly calibrated, or is the entire company strictly liable? Perhaps only fully explainable AI applications (XAI) should be allowed in areas with high-risk decision-making, such as medicine?

It would be conceivable here to define the AI as a functional legal entity. Then we would have a legal entity that is primarily liable and against which civil claims could be brought, for example via an insurance solution financed by all parties involved (manufacturers and users) or via a liability fund into which payments are made in advance. This would also have a regulatory effect, because high-risk AI would no longer be economically viable. Whether this would make sense compared with the EU's current "risk-based approach" in its AI Act is addressed in my thesis. However, it is not primarily the civil liability question that should be answered, but the fundamental question of a possible legal status for AI, also with regard to criminal liability.

Today, technological development is creating entities that can act in legally relevant ways. As early as 2017, the European Parliament proposed defining tools that perform actions independently as "electronic persons". At the time, however, this was met with strong rejection because many saw it as anthropomorphism, i.e. the transfer of human characteristics to machines. But (at least current) technology has no will or intent of its own. Therefore, analogies, if any, are more likely to be drawn with existing "artificial" legal entities such as corporations.

We need to accommodate technological developments from a legal perspective.

Where do you see the trickiest legal issues around AI?
I guess one of them is the one I am working on. We need a fundamental discussion about how we stake out this very open field of AI's new scope of social impact in order to preserve the primacy of humans in this technologized world. In areas where life and death are at stake, do we want to rely purely on statistics, or do we always want to ensure a "human in the loop"?

Political aspects are also at stake here, and they are difficult to separate from the legal ones. Currently, many problems arise around copyright and intellectual property law: Can an AI itself be the author of an intellectual creation? What about the copyrights of the people whose works were used for the AI's training or input? Should we have some kind of "watermarking" in data sets to distinguish what comes from a machine and what from a human?
We need to accommodate these technological developments from a legal perspective. This also involves creating trust in a world where fiction and truth are blurred.

(© CAIM, University of Bern)

Christoph Ammon studied law at the Universities of Fribourg and Bern and at the University of British Columbia in Vancouver. After completing his legal training and passing the bar exam in the Canton of Bern, he returned to the University of Bern at the end of 2020 as a doctoral student to build on the results of his master's thesis. In 2024, he plans to spend a year as a visiting research student at the University of California, Berkeley.

For his dissertation with Prof. Martino Mona, Christoph is currently searching for the appropriate legal status of AI in society, specifically in the context of medical technology and medical robotics. The ARTORG Center for Biomedical Engineering Research of the University of Bern supports him with its MedTech expertise. Christoph hopes that his research will help clarify fundamental questions for an urgently needed AI regulation and thus contribute to integrating AI into society in a purposeful and collaborative way.