AIC: Explainable Visual Recognition with Minimal Supervision
Thursday, 9 December 2021, 5:00 p.m.
In November 2021, the RWTH AI Center launches a new event series: the "Artificial Intelligence Colloquium", or AIC for short. In this series, renowned researchers from RWTH and other universities present cutting-edge research on methods and applications of Artificial Intelligence.
Clearly explaining the rationale for a classification decision to an end user can be as important as the decision itself. Existing approaches to deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account the class-discriminative image properties that justify visual predictions. In this talk, I will present my past and current work on explainable machine learning combining vision and language, in which we show (1) how to learn simple and compositional representations of images that focus on the discriminative properties of the visible object, jointly predicting a class label and explaining why (or why not) the predicted label is chosen for the image, (2) how to evaluate the effectiveness of these explanations on the zero-shot learning task, and (3) how to improve the explainability of deep models via conversations.
Zeynep Akata is a professor of Computer Science (W3) within the Cluster of Excellence Machine Learning at the University of Tübingen and a senior researcher at the Max Planck Institutes for Informatics and for Intelligent Systems. After completing her PhD at INRIA Rhône-Alpes with Prof. Cordelia Schmid (2014), she worked as a postdoctoral researcher at the Max Planck Institute for Informatics with Prof. Bernt Schiele (2014-17) and at the University of California, Berkeley with Prof. Trevor Darrell (2016-17). Before moving to Tübingen in October 2019, she was an assistant professor at the University of Amsterdam with Prof. Max Welling (2017-19). She received a Lise Meitner Award for Excellent Women in Computer Science from the Max Planck Society in 2014, a Werner von Siemens Ring young scientist honour in 2019, an ERC Starting Grant in 2019, and the DAGM German Pattern Recognition Award in 2021. Her research interests include multimodal learning and explainable AI.