The research group "Reliable Machine Learning" studies the properties of machine learning algorithms. In view of the recent success of deep learning methods in applications like image recognition, speech recognition, and automatic translation, the group especially focuses on properties of deep neural networks.
Although a neural network trained, e.g., for an image classification task might work well on "real inputs", it has been repeatedly shown empirically that such networks are vulnerable to adversarial examples: a minimal perturbation of the input data (imperceptible to a human) can cause the network to misclassify the input. Thus, an important research area of the group is to mathematically understand the reasons for the existence of such adversarial examples (i.e., the instability of trained neural networks), and, building on that understanding, to develop improved methods that yield provably robust neural networks.
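The phenomenon can be illustrated with a toy example (this is a minimal sketch, not the group's method): for a linear classifier f(x) = sign(w·x) in high dimension, an FGSM-style perturbation of size eps per coordinate shifts the score by eps·‖w‖₁, so a change that is tiny in every single coordinate can still flip the predicted label. All names and the weight vector below are hypothetical.

```python
import numpy as np

# Toy linear "network" f(x) = sign(w . x) on a 1000-dimensional input.
rng = np.random.default_rng(0)
d = 1000
w = rng.choice([-1.0, 1.0], size=d)   # hypothetical trained weights
x = rng.normal(0.0, 1.0, size=d)      # a "real input"

score = float(w @ x)
label = int(np.sign(score))

# FGSM-style perturbation: nudge every coordinate slightly against
# the current label. Since w has entries +-1, the score moves by
# exactly eps * d, so a per-coordinate budget of 2|score|/d suffices.
eps = 2.0 * abs(score) / d
x_adv = x - eps * label * np.sign(w)

adv_label = int(np.sign(w @ x_adv))
print(label, adv_label)                 # the two labels differ
print(float(np.max(np.abs(x_adv - x))))  # per-coordinate change is tiny
```

The point of the sketch is the dimension dependence: the required per-coordinate budget shrinks like 1/d, which is one intuition for why high-dimensional image classifiers are so easy to fool.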
We, the MIDS Team, are very impressed by the Hackathon initiatives of our Data Science students. To support this endeavor and other activities involving programming (e.g., working on group projects), we have obtained Room GEOG-008 as a Hackerspace for use by students from February 1 until April 25.
Please leave the room in good condition: return the furniture to its place and do not leave any waste behind.