The research group "Reliable Machine Learning" studies the properties of machine learning algorithms. In view of the recent success of deep learning methods in applications such as image recognition, speech recognition, and automatic translation, the group focuses in particular on the properties of deep neural networks.
Although a neural network trained, e.g., for an image classification task might work well on "real inputs", it has been repeatedly shown empirically that such networks are vulnerable to adversarial examples: a minimal perturbation of the input data, imperceptible to a human, can cause the network to misclassify the input. An important research area of the group is therefore to mathematically understand the reasons for the existence of such adversarial examples (i.e., the instability of trained neural networks) and, building on that understanding, to develop improved methods that yield provably robust neural networks.
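To make the notion of an adversarial perturbation concrete, here is a minimal sketch of one classic construction, the fast gradient sign method (FGSM); it is an illustration of the general phenomenon, not the group's own method, and the model, loss, and epsilon value are assumptions chosen for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Illustrative sketch of the fast gradient sign method (FGSM).

    Nudges every input component by +/- epsilon in the direction that
    increases the cross-entropy loss; for many trained classifiers this
    small, visually imperceptible change is enough to flip the prediction.
    (`model`, `epsilon`, and the loss are illustrative assumptions.)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range
```

In a typical image-classification setting, x would be a batch of images with pixel values in [0, 1] and y the true labels; the returned perturbed input is then fed back to the same model to check whether the prediction changes.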
SampTA is back! This year's Sampling Theory and Applications conference will be held at Yale University, July 10-14, 2023.
As chair of the SampTA steering committee, Prof. Götz Pfander is co-organizing the "International Conference on Sampling Theory and Applications," July 10-14, 2023, at Yale!
Abstract submission deadline: end of Friday, March 10, 2023, Anywhere on Earth (AoE).
Full paper submission and co-author registration deadline: Friday, March 17, 2023 (AoE).