I am interested in creating safe Machine Learning programs. This includes techniques for formally verifying properties of Deep Neural Networks, as well as safe Reinforcement Learning.
Currently, I have several ongoing projects: abstraction of Neural Networks (smarter than in my thesis), using machine learning (especially Decision Trees) for better (faster or more explainable) strategy generation on MDPs, explainable and certifiable regression models, and improving reinforcement learning via updates.
I am always happy to collaborate with students. If you are interested in one of the ongoing projects, or if you have another cool idea, feel free to contact me. I'll be happy to meet you.

Pranav Ashok, Vahid Hashemi, Jan Křetínský, Stefanie Mohr. DeepAbstract: Neural Network Abstraction for Accelerating Verification. Accepted at ATVA 2020. (pre-print, link)
Vahid Hashemi, Jan Křetínský, Stefanie Mohr, Emmanouil Seferis. Gaussian-based runtime detection of out-of-distribution inputs for neural networks. Accepted at Runtime Verification 2021. (link, PDF)
Stefanie Mohr, Konstantina Drainas, Jürgen Geist. Assessment of Neural Networks for Stream-Water-Temperature Prediction. Accepted at ICMLA 2021. (pre-print, link)
Krishnendu Chatterjee, Joost-Pieter Katoen, Stefanie Mohr, Maximilian Weininger, Tobias Winkler. Stochastic games with lexicographic objectives. Accepted at Formal Methods 2023. (link)
Calvin Chau, Jan Křetínský, Stefanie Mohr. Syntactic vs Semantic Linear Abstraction and Refinement of Neural Networks. Accepted at ATVA 2023. (link)
Konstantina Drainas, Lisa Kaule, Stefanie Mohr, Bhumika Uniyal, Romy Wild, Jürgen Geist. Predicting stream water temperature with artificial neural networks based on open-access data. Accepted at Hydrological Processes 2023. (link)