It is widely known that policies trained using reinforcement learning (RL) to solve simulated robotics problems (e.g., in MuJoCo) come with no formal safety guarantees and are difficult to interpret.

Question we address: How can we develop physics-informed reinforcement learning algorithms that guarantee safety and interpretability?

So-called Neural ODEs have recently generated considerable interest in deep learning. The main idea is to formulate artificial neural networks as continuous, parametrized dynamical systems, trained by gradient descent as in standard deep learning. The formulation also has a biological motivation, since neurons in the brain operate in continuous time. One application is to replace a deep stack of ResNet blocks with a single Neural ODE block, which raises many questions about the properties of the learned dynamical system.
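To make the ResNet/ODE connection concrete, here is a minimal sketch (with a hypothetical one-layer tanh vector field and randomly initialized weights standing in for learned parameters): a residual block h + f(h) is exactly one explicit Euler step of the ODE dh/dt = f(h), and a Neural ODE block is the same flow integrated with many small steps.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
W = rng.standard_normal((dim, dim)) * 0.1  # hypothetical "learned" parameters
b = np.zeros(dim)

def f(h):
    """Parametrized vector field defining the dynamics dh/dt = f(h)."""
    return np.tanh(W @ h + b)

def resnet_block(h):
    """One residual block: an explicit Euler step with step size 1."""
    return h + f(h)

def neural_ode_block(h, t1=1.0, steps=100):
    """Integrate dh/dt = f(h) from t=0 to t=t1 with small Euler steps;
    as steps grows this approaches the continuous-depth limit of
    stacking residual blocks with shared weights."""
    dt = t1 / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0 = rng.standard_normal(dim)
print(resnet_block(h0))      # one discrete residual step
print(neural_ode_block(h0))  # its continuous-depth counterpart
```

In practice the fixed-step Euler loop would be replaced by an adaptive ODE solver with the adjoint method for memory-efficient gradients; the sketch only illustrates the discretization correspondence.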

Question we address: What are the properties of the dynamical systems arising from training Neural ODE architectures?

It is well known that deep neural image classifiers are vulnerable to adversarial attacks, which may have devastating consequences. Our approach to designing robust image classifiers is to embed a symbolic algorithm within the deep learning pipeline, which renders standard white-box attacks impossible and makes black-box attacks significantly harder. One particular tool that we propose to use in this context is topological data analysis.
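The following toy sketch (a hypothetical two-feature logistic "classifier", not our actual pipeline) illustrates why a non-differentiable stage blocks standard white-box attacks: FGSM needs the gradient of the loss with respect to the input, while a symbolic or piecewise-constant preprocessing step has zero gradient almost everywhere, so that gradient carries no attack signal.

```python
import numpy as np

w = np.array([1.5, -2.0])   # hypothetical trained weights
b = 0.1

def logits(x):
    """Toy linear classifier score."""
    return w @ x + b

def fgsm(x, y, eps=0.1):
    """White-box FGSM: perturb x by eps * sign of the input gradient.
    For logistic loss, d(loss)/dx = (sigmoid(logits(x)) - y) * w."""
    p = 1.0 / (1.0 + np.exp(-logits(x)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def quantize(x, levels=4):
    """Non-differentiable preprocessing: piecewise constant, so its
    gradient is zero almost everywhere and gives the attacker nothing."""
    return np.round(x * levels) / levels

x = np.array([0.2, 0.7])
x_adv = fgsm(x, y=1)
print(logits(x), logits(x_adv))  # the attack lowers the true-class logit
```

A piecewise-constant stage like `quantize` is only one crude stand-in for a symbolic component; stronger black-box and gradient-approximation attacks still exist, which is precisely what makes the design question nontrivial.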

Question we address: How can we design image classifiers that are robust to adversarial attacks?

The loss landscapes of deep neural networks are known to form very complicated structures; their low-dimensional visualizations bring to mind surrealist paintings. Our goal is to study landscape properties such as the characterization of local minima and the convergence of gradient descent for deep networks, in particular autoencoder architectures.
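A standard way to probe such a landscape is to evaluate the loss along a random direction d through a parameter point, L(alpha) = loss(theta + alpha * d). The sketch below applies this to a hypothetical tiny linear autoencoder with tied weights (toy data and random parameters, chosen only for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))          # toy dataset
W = rng.standard_normal((6, 2)) * 0.1     # encoder weights (decoder = W.T)

def loss(W):
    """Mean squared reconstruction error of the tied-weight
    linear autoencoder x -> (x @ W) @ W.T."""
    recon = (X @ W) @ W.T
    return np.mean((X - recon) ** 2)

d = rng.standard_normal(W.shape)
d /= np.linalg.norm(d)                    # unit direction in parameter space

alphas = np.linspace(-1.0, 1.0, 21)
profile = [loss(W + a * d) for a in alphas]
print(min(profile), max(profile))         # a 1-D slice of the landscape
```

Plotting `profile` against `alphas` gives a one-dimensional slice of the surface; two-dimensional slices over a pair of (suitably normalized) random directions produce the familiar surrealist-looking contour plots.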

Question we address: We study the optimization landscape properties of deep neural networks including autoencoders.