
UMI LAB

Artificial intelligence (AI) has spread rapidly through both science and industry in recent years. AI methods have produced outstanding results in a wide variety of areas, such as autonomous driving and cancer cell diagnostics. Despite these achievements, there is a risk that decision-making systems will make incorrect predictions, especially in safety-critical areas. The associated risks raise legal, social, and ethical questions. Since the use of AI has become competitive in many areas, however, these highly complex systems must be better understood in order to guarantee the algorithmic accountability of AI technologies and thus enable their widespread use.

The field of Explainable Artificial Intelligence (XAI), which deals with understanding AI models and their decisions, addresses this problem. Individual decisions can already be made comprehensible, for example by marking the regions of an image that contributed most to the AI system's decision. To date, however, no method enables a holistic understanding of an AI model's behavior. Yet in order to use AI models in a trustworthy manner, a model must be transparent: its behavior must be known before it is deployed, and potentially incorrect behavior must be ruled out.
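Such per-decision explanations are often computed by attributing the model's output back to its input features. As a minimal illustration of this idea (a generic Gradient × Input attribution on a toy numpy network, not one of the lab's own methods; all names and weights here are invented for the example):

```python
import numpy as np

# Toy two-layer ReLU network, f(x) = w2 @ relu(W1 @ x), with random weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6))
w2 = rng.normal(size=4)

def forward(x):
    h = np.maximum(0.0, W1 @ x)   # hidden ReLU activations
    return w2 @ h, h

def gradient_x_input(x):
    """Gradient x Input attribution: relevance_i = x_i * df/dx_i."""
    _, h = forward(x)
    mask = (h > 0).astype(float)  # ReLU derivative per hidden unit
    grad = (w2 * mask) @ W1       # df/dx via the chain rule
    return x * grad

x = rng.normal(size=6)
relevance = gradient_x_input(x)
out, _ = forward(x)
# For this bias-free piecewise-linear net, the relevances sum to f(x).
print(np.isclose(relevance.sum(), out))  # prints True
```

Each entry of `relevance` indicates how strongly the corresponding input feature pushed the output up or down; for an image classifier the same quantity, computed per pixel, yields the highlighted-region heatmaps described above.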

The aim of the project “Explaining 4.0” is to develop methods that contribute significantly to a holistic, global understanding of AI models. Its key elements are efficiency (through a priori knowledge), comprehensibility (through semantic elements), and robustness.

https://www.softwaresysteme.pt-dlr.de/media/content/Projektblatt_Explaining_01IS20055.pdf

Marina M.-C. Höhne

UMI lab head

marina.hoehne@tu-berlin.de

Marina M.-C. Höhne (née Vidovic) received her master's degree in Technomathematics from the Technical University of Berlin in 2012. Afterwards she worked as a researcher at Ottobock in Vienna, Austria, on time series data and domain adaptation for controlling prosthetic devices. In 2014 she started her PhD on explainable AI and received the Dr. rer. nat. degree from the Technical University of Berlin in 2017. After one year of maternity leave, she continued at the machine learning chair at TU Berlin as a postdoctoral researcher.

In 2020 she started her own research group, UMI Lab, on explainable artificial intelligence at the Technical University of Berlin. Since 2021 she has been a junior fellow at the Berlin Institute for the Foundations of Learning and Data (BIFOLD) and a fellow of the ProFiL excellence program.

Since 2021 she has also held a secondary appointment as Associate Professor at the Arctic University of Norway.

Google Scholar profile

Kirill Bykov

PhD student
kirill.bykov@campus.tu-berlin.de

Kirill Bykov received his Master's double degree in Computer Science from the Technical University of Berlin as part of the EIT Digital Data Science double degree program with the Eindhoven University of Technology. Previously, he earned his Bachelor's degree in Applied Mathematics from Saint Petersburg State University. Kirill is interested in Deep Learning, Bayesian Learning, and Explainable AI. Currently, he is pursuing his PhD at the Department of Machine Learning at TU Berlin and is part of the UMI Lab research group. Since 2021 he has been a fellow of the BIFOLD graduate school.

Google Scholar profile

Philine Lou Bommer

PhD Student

Philine Bommer received her Master's degree in Physics from the University of Heidelberg in 2020 and subsequently worked as a research assistant at ZI Mannheim. Previously, she earned her Bachelor's degree in Physics, also at the University of Heidelberg. Philine is interested in Deep Learning, Explainable AI, scientific data evaluation, and environmental science. Currently, she is pursuing her PhD at the Department of Machine Learning at TU Berlin and is part of the UMI Lab research group.

Google Scholar

Anna Hedström

PhD Student

anna.hedstroem@tu-berlin.de

Anna Hedström is currently pursuing her PhD at the Department of Machine Learning at TU Berlin and is part of the UMI Lab research group. She received her Master's degree in Machine Learning from the Royal Institute of Technology (KTH) and her Bachelor's degree in engineering from University College London (UCL). Her current research interests include Deep Learning and Explainable AI (XAI), in particular the evaluation of XAI methods. Previously, Anna worked as a data scientist, held teaching and research assistantship positions, and interned at companies such as Bosch, BCG, and other ML start-ups. She is also a co-organizer of Women in Machine Learning and Data Science (WiMLDS) meet-ups in Berlin.

Google Scholar

Dennis Grinwald

Student research assistant

Dennis Grinwald is currently pursuing a Master's degree in Computer Science at the Technical University of Berlin (TUB). Besides this, he works as a student research assistant with the Understandable Machine Intelligence Lab (UMI Lab) at TUB. His current research interests include Bayesian machine learning, explainable machine learning, and deep learning theory.

Laura Kopf

Student research assistant

Laura Kopf is currently enrolled in the Master's program Cognitive Systems at the University of Potsdam and is working as a student research assistant at UMI Lab at the Technical University of Berlin. She earned a Bachelor's degree in Philosophy, Linguistics, and History at the Free University of Berlin. Prior to that, she received a Bachelor's degree in Media Culture at the Bauhaus University Weimar. Her research interests include Deep Learning, Natural Language Processing, and Ethics of AI.

Dilyara Bareeva

Student research assistant


Dilyara Bareeva is currently pursuing a Bachelor’s degree in Computer Science at the Technical University of Berlin and working as a student research assistant at the Understandable Machine Intelligence Lab. Previously, Dilyara earned her Master’s degree in Economics and Management Science at Humboldt University of Berlin and a Bachelor’s degree in Economics at the Moscow State Institute of International Relations (MGIMO). She spent multiple years working as a data scientist at a consulting firm in Berlin. Her current research interests include Deep Learning, Explainable AI and Climate AI.

 

Open Master/Bachelor Thesis Topics

Gradient-based Explanation for ConvLSTMs - Explanations including Space and Time Domain (2nd order/non-linear deep taylor decomposition)
philine.l.bommer@tu-berlin.de
Normalizing flows for physical mapping of relevance scores
philine.l.bommer@tu-berlin.de
Explaining Causal Neural Networks
philine.l.bommer@tu-berlin.de
Revisiting sanity checks: critical examination of empirical XAI evaluation procedures 
anna.hedstroem@tu-berlin.de
Self-explaining networks: devising a multi-objective optimisation approach for higher quality explanations
anna.hedstroem@tu-berlin.de
Learning from randomness: what can explanations of random models teach us?
anna.hedstroem@tu-berlin.de


Papers from UMI Lab


MAR Building
Room 4.013 - 4.011
Marchstraße 23
10587 Berlin

 

Follow us on Twitter

twitter.com/TUBerlin_UMI