The research group "Reliable Machine Learning" studies the properties of machine learning algorithms. In view of the recent success of deep learning methods in applications such as image recognition, speech recognition, and automatic translation, the group focuses in particular on the properties of deep neural networks.
Although a neural network trained, e.g., for an image classification task might work well on "real inputs", it has been repeatedly shown empirically that such networks are vulnerable to adversarial examples: a minimal perturbation of the input data (imperceptible to a human) can cause the network to misclassify the input. An important research area of the group is therefore to mathematically understand the reasons for the existence of such adversarial examples (i.e., the instability of trained neural networks) and, building on that understanding, to develop improved methods that yield provably robust neural networks.
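As an illustration of this phenomenon (and not of the group's own methods), the following sketch shows the standard Fast Gradient Sign Method of Goodfellow et al., which constructs such a perturbation from the gradient of the classification loss; the model, input, label, and perturbation budget eps are placeholders.

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM): perturb the input
# in the direction of the sign of the loss gradient to provoke a misclassification.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, eps=0.01):
    """Return a perturbed copy of x that often changes the model's prediction.

    model: a classifier mapping images to logits (placeholder).
    x:     input image tensor of shape (1, C, H, W), values in [0, 1].
    label: true class index tensor of shape (1,).
    eps:   perturbation budget in the sup-norm; small values keep the change
           imperceptible to a human.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Move each pixel by +/- eps in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```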
Award for Dr. Thomas Jahn and Prof. Felix Voigtlaender
The Journal of Complexity has awarded the Best Paper Award 2023 to Prof. Felix Voigtlaender (Chair of Reliable Machine Learning), his colleague Dr. Thomas Jahn, and Prof. Tino Ullrich from Chemnitz University of Technology.
The paper "Sampling numbers of smoothness classes via ℓ¹-minimization" was published in the Journal of Complexity in December and was selected as the winner by a committee. On August 19, the prize was presented at an award ceremony in Canada, where Dr. Jahn and Prof. Ullrich received it from Josef Dick of UNSW.