
Grégoire Montavon

  • Research Associate | TU Berlin
    Junior Research Group Lead | BIFOLD

Biography

Grégoire Montavon is a Research Associate in the Machine Learning Group at the Technische Universität Berlin, and Junior Research Group Lead at the Berlin Institute for the Foundations of Learning and Data (BIFOLD). He received a Master's degree in Communication Systems from École Polytechnique Fédérale de Lausanne in 2009, and a Ph.D. degree in Machine Learning from the Technische Universität Berlin in 2013. His research interests include explainable AI, deep neural networks, and unsupervised learning.

Research on Explainable AI

Grégoire Montavon's current research is on advancing the foundations and algorithms of explainable AI (XAI) in the context of deep neural networks. One particular focus is on closing the gap between existing XAI methods and practical desiderata. Examples include using XAI to build machine learning models that are more trustworthy and autonomous, and using XAI to model the behavior of complex real-world systems so that these systems become meaningfully actionable.

See also: https://bifold.berlin/research/#dnn

Research Highlights

  • The Layer-Wise Relevance Propagation (LRP) method. LRP robustly and efficiently explains deep neural network predictions in terms of input features (a minimal sketch of one propagation step is given after this list).
  • The Deep Taylor Decomposition framework, which mathematically connects the LRP procedure to Taylor expansions and leads to a systematic way of designing LRP propagation rules.
  • The "Neuralization-Propagation" framework for explaining non-neural-network models, which consists of rewriting such models (e.g. one-class SVMs, K-means) as strictly equivalent neural networks, and using the neural network representation together with LRP to produce explanations (see the K-means sketch further below).
  • Higher-order extensions of LRP (BiLRP and GNN-LRP), which make it possible to identify joint feature contributions in models such as graph neural networks or deep similarity models.
  • A method to systematically verify that trained neural networks predict as expected and are not subject to a Clever Hans effect.
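
To make the first item above concrete, the following is a minimal sketch of one LRP propagation step (the epsilon-rule) through a single fully-connected layer, written in NumPy. The function name, array shapes and the choice of rule are illustrative assumptions, not a reference implementation.

    import numpy as np

    def lrp_epsilon_step(a, W, b, R_out, eps=1e-6):
        # a:     input activations of the layer, shape (d_in,)
        # W, b:  layer parameters, shapes (d_in, d_out) and (d_out,)
        # R_out: relevance already attributed to the layer outputs, shape (d_out,)
        z = a @ W + b                              # forward pre-activations
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer avoids division by zero
        s = R_out / z                              # relevance per unit of pre-activation
        return a * (W @ s)                         # R_j = a_j * sum_k w_jk * s_k

Applying such a step layer by layer, starting from the output score and ending at the input features, yields a relevance map whose total is approximately conserved across layers.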

These contributions are described in more detail in our recent review paper on Explainable AI.
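
As an illustration of the neuralization step for the K-means case, the sketch below rewrites cluster membership as a small two-layer network: a layer of pairwise linear discriminants followed by a min-pooling. Names and shapes are again illustrative assumptions rather than the published implementation.

    import numpy as np

    def neuralize_kmeans(mu):
        # mu: cluster centroids, shape (K, d).
        # Cluster evidence f_c(x) = min_{k != c} (w_ck . x + b_ck), with
        # w_ck = mu_c - mu_k and b_ck = 0.5 * (||mu_k||^2 - ||mu_c||^2);
        # f_c(x) > 0 exactly when K-means assigns x to cluster c.
        half_sq = 0.5 * np.sum(mu ** 2, axis=1)                    # 0.5 * ||mu_k||^2, shape (K,)

        def evidence(x):
            scores = mu @ x - half_sq                              # linear layer, shape (K,)
            out = np.empty(len(mu))
            for c in range(len(mu)):
                out[c] = np.min(scores[c] - np.delete(scores, c))  # min-pooling over K-1 discriminants
            return out

        return evidence

Once the model is written in this layered form, LRP-style propagation rules can be applied to it, so that cluster assignments become explainable in terms of input features.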

See also: www.heatmapping.org

Teaching

Winter semester 2021/2022:

Summer semester 2021:

Publications

Preprints

Edited Books

Book Chapters

Journal Publications

Conference Publications

Software Publications

Workshop Publications

Theses

Unpublished
