Although deep neural networks (DNNs) have achieved impressive performance across many applications, they also exhibit well-known sensitivities and security concerns, including vulnerability to adversarial attacks, backdoor attacks, and unfair classification behavior. It is therefore important to better understand these risks before rolling out reliable machine-learning tools, both for human-facing applications and for basic scientific applications in biology, health, engineering, and physics. Characterizing the sensitivity and security of these models is especially important because such applications are often mission-critical, and a failure can have severe consequences.
The project is organized into two main thrusts. First, we will design new robust DNN architectures by exploiting the dynamical-systems perspective on machine learning, which opens the opportunity to introduce ideas from scientific computing and numerical analysis. Second, we will focus on adversarial machine learning, developing a better understanding of adversarial examples and studying the trade-offs of adversarially robust learning and its impact on the fairness of trained models.
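To illustrate the dynamical-systems perspective mentioned in the first thrust: a residual block can be read as one forward-Euler step of an ODE, x <- x + h * f(x), so tools from numerical analysis (e.g., step-size control for stability) become applicable to architecture design. The sketch below is a minimal, hypothetical illustration; the vector field, dimensions, and step size are our own choices, not the project's actual architecture.

```python
import numpy as np

def f(x, W):
    # A smooth vector field parameterizing one residual block.
    # W is a hypothetical (untrained) weight matrix used for illustration.
    return np.tanh(W @ x)

def resnet_forward(x0, weights, h=0.1):
    # A residual network viewed as forward-Euler integration of the ODE
    # dx/dt = f(x, W_t): each block applies the update x <- x + h * f(x, W_t).
    # Smaller step sizes h correspond to more stable discretizations.
    x = x0
    for W in weights:
        x = x + h * f(x, W)
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)
weights = [0.5 * rng.standard_normal((4, 4)) for _ in range(10)]
out = resnet_forward(x0, weights)
```

In this reading, the depth of the network plays the role of integration time, which is what lets stability results from numerical ODE solvers inform robust architecture choices.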
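For the second thrust, the canonical construction of an adversarial example is the Fast Gradient Sign Method (FGSM): perturb the input by a small amount eps in the direction that increases the loss. The sketch below applies FGSM to a toy linear classifier with an analytically computed input gradient; the weights, input, and eps are illustrative assumptions, not data from the project.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, y):
    # Logistic loss of a linear classifier w on input x, label y in {-1, +1}.
    return np.log1p(np.exp(-y * (w @ x)))

def input_grad(x, w, y):
    # Gradient of the logistic loss with respect to the input x.
    return -y * sigmoid(-y * (w @ x)) * w

def fgsm(x, w, y, eps=0.1):
    # FGSM: move each input coordinate by eps in the loss-increasing
    # direction, i.e., along the sign of the input gradient.
    return x + eps * np.sign(input_grad(x, w, y))

# Hypothetical classifier and input, for illustration only.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.8])
y = 1
x_adv = fgsm(x, w, y, eps=0.1)
# The perturbed input incurs a strictly larger loss than the original.
```

Training on such perturbed inputs (adversarial training) improves robustness, but it is exactly this procedure whose accuracy and fairness trade-offs the second thrust proposes to study.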