The research group "Reliable Machine Learning" studies the properties of machine learning algorithms. In view of the recent success of deep learning methods in applications such as image recognition, speech recognition, and automatic translation, the group focuses in particular on the properties of deep neural networks.
Although a neural network trained, e.g., for an image classification task may perform well on "real" inputs, it has been shown repeatedly in experiments that such networks are vulnerable to adversarial examples: a minimal perturbation of the input, imperceptible to a human, can cause the network to misclassify it. An important research area of the group is therefore to understand mathematically why such adversarial examples exist (i.e., why trained neural networks are unstable) and, building on that understanding, to develop improved methods that yield provably robust neural networks.
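To make the notion of a "minimal perturbation" concrete, the following is a small sketch of the fast gradient sign method (FGSM), one standard way to construct adversarial examples in the literature; it is not presented here as the group's own method, and the classifier `net`, the data loader, and the step size `epsilon` are illustrative assumptions.

```python
# Sketch of the fast gradient sign method (FGSM) for crafting an
# adversarial example; model, data, and epsilon are illustrative.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Perturb x by epsilon in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # A step along sign(grad) increases the loss as much as possible
    # under an L-infinity constraint of size epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range

# Illustrative usage with a hypothetical pretrained classifier `net`:
# x, y = next(iter(loader))                      # a batch of images and labels
# x_adv = fgsm_attack(net, x, y)
# print(net(x).argmax(1), net(x_adv).argmax(1))  # predictions often differ
```

Even for such a simple one-step perturbation, the predicted labels on the perturbed inputs frequently differ from those on the clean inputs, which is precisely the instability the group aims to explain and to rule out by provably robust training methods.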
The participants discussed a range of methods, some of them new and still experimental, together with their theoretical frameworks, limitations, and implementation options. They then discussed which modeling approaches are conceivable. The group intends to publish the resulting insights in a joint paper.