
Institute for Ethics in Artificial Intelligence presents research projects

The Institute for Ethics in Artificial Intelligence at Technische Universität München (TUM) has begun its work. Initial research projects were presented at the official opening. All have one thing in common: they operate at the interface of ethics and artificial intelligence (AI).

Since 2012, TUM has been researching the interactions between science, technology, and society at the Munich Center for Technology in Society (MCTS), established as part of the Excellence Initiative. As part of the MCTS, the TUM Institute for Ethics in Artificial Intelligence (IEAI) will focus on the ethical implications of artificial intelligence. The institute is supported by Facebook with 6.5 million euros. According to TUM, however, this financial support is not subject to any conditions or expectations from the US company.

People's values, needs and expectations

With the IEAI, TUM aims to bring together its scientific and technical disciplines with the humanities and social sciences to make AI-based technologies trustworthy and socially acceptable.

“As a technical university, we can only effectively contribute to social progress if we align our technological innovations with people’s values, needs, and expectations.”

said Thomas Hofmann, the new president of TUM.

Therefore, at the IEAI, researchers from medicine, the natural sciences, and engineering collaborate in interdisciplinary teams with researchers from the social sciences and ethics. The new institute, which has received funding of approximately €2.3 million, will launch the following research projects:

Ethics of autonomous driving

Ethical theories will be translated into algorithms and integrated into programs to adapt the direction and speed of autonomous vehicles to current situations, for example, to make ethically justifiable decisions in the event of an unavoidable collision with humans. The enhanced programs will be tested and evaluated in a simulator using familiar and new scenarios.
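One way such a translation of ethical theories into algorithms can be pictured is as a weighted cost function over candidate maneuvers. The following minimal sketch is illustrative only and not the institute's actual method; the maneuver names, risk scores, and weights are assumptions made up for the example.

```python
# Illustrative sketch: selecting a driving maneuver by minimizing a
# weighted harm score. All names, risk values, and weights below are
# hypothetical assumptions, not values from the research project.

def choose_maneuver(maneuvers, weights):
    """Return the candidate maneuver with the lowest weighted harm score."""
    def cost(maneuver):
        # Sum each risk component, scaled by its ethical weight.
        return sum(weights[k] * maneuver["risks"][k] for k in weights)
    return min(maneuvers, key=cost)

# Hypothetical candidates in an unavoidable-collision scenario.
candidates = [
    {"name": "brake_hard",  "risks": {"pedestrian": 0.1, "occupant": 0.3}},
    {"name": "swerve_left", "risks": {"pedestrian": 0.6, "occupant": 0.1}},
]
# Equal weighting of all persons is itself an ethical choice encoded here.
weights = {"pedestrian": 1.0, "occupant": 1.0}
best = choose_maneuver(candidates, weights)
```

Different ethical theories would correspond to different weightings and cost structures, which is precisely what makes testing such programs against familiar and new simulator scenarios necessary.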

AI-based decision support for ethical questions in everyday clinical practice

This project investigates whether it is possible and useful to use machine learning approaches to assist physicians in making important decisions in everyday clinical practice, for example, when deciding for or against a medication.

Understanding the dynamics of hate speech and fake news

Negative information such as "hate speech" or "fake news" sometimes spreads like wildfire on social media. The platforms' respective AI algorithms likely play a crucial role in this. Therefore, the dynamics of such opinion formation are now to be mathematically modeled in order to better understand them and explore questions of responsibility for such "wildfires" and possible control mechanisms.
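A standard starting point for mathematically modeling such "wildfire" dynamics is an epidemic-style compartment model, in which users move from not having seen a story, to actively sharing it, to losing interest. The sketch below is a generic SIR-like illustration under assumed parameters, not the model developed in the project.

```python
# Illustrative sketch: discrete-time SIR-style spread of a piece of content.
# spread_rate and loss_rate are hypothetical parameters chosen for the example.

def simulate_spread(population, initially_sharing, spread_rate, loss_rate, steps):
    """Track (susceptible, sharing, lost_interest) counts over time."""
    s = float(population - initially_sharing)  # have not yet seen the story
    i = float(initially_sharing)               # actively sharing it
    r = 0.0                                    # lost interest / saw a correction
    history = [(s, i, r)]
    for _ in range(steps):
        new_shares = spread_rate * s * i / population  # contact-driven spread
        new_losses = loss_rate * i                     # sharers drop out
        s -= new_shares
        i += new_shares - new_losses
        r += new_losses
        history.append((s, i, r))
    return history

history = simulate_spread(population=10_000, initially_sharing=10,
                          spread_rate=0.5, loss_rate=0.1, steps=100)
```

In such a model, a platform's recommendation algorithm effectively sets the spread rate, which is one way the question of responsibility for a "wildfire" can be made quantitative.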

AI-supported targeting of perpetrators on social media

The project investigates whether AI-automated, personalized interventions can encourage those who spread fake news to change their behavior. Ethical and psychological issues, as well as data protection and privacy concerns, are being examined.

Trust in AI through regulation

The project examines proposed approaches to the regulation and certification of AI-based programs with respect to their social and technical feasibility. From this, concrete recommendations for action for society and policymakers, as well as technological requirements, will be developed to strengthen societal control over, and trust in, such systems.

AI for human-centric Industry 4.0

The data collected during the operation of networked manufacturing facilities (Industry 4.0) enables the optimization of processes in real time. However, this can create a feeling of constant surveillance for workers. The project examines ethically problematic aspects of the use of AI and attempts to develop algorithms that optimize production processes for the working person, with their strengths, weaknesses, and needs, rather than subordinating people to the technical needs of the production processes.
