Cyber-physical systems (CPS) are engineered systems that tightly integrate computation with physical components. Artificial intelligence, especially with the development of effective machine learning algorithms, has enabled cyber-physical systems to sense the world through increasingly complex sensing modalities.
This has resulted in enhanced capabilities in systems such as smart medical devices and unmanned vehicles in the air, on land, and at sea, in addition to novel infrastructural changes such as the smart grid and Industry 4.0.
Interestingly, almost all such systems are safety-critical, where errors can have catastrophic consequences for human lives. The use of AI techniques, especially deep neural networks, has made it harder to catch errors, let alone prove the correctness of such systems.
In this talk I will discuss some of my recent research on verifying properties of deep neural networks (DNNs) when used as control policies in safety-critical cyber-physical systems. In addition, I will discuss techniques that have been shown to be effective in building correct-by-construction neurosymbolic models called memory-consistent neural networks.
Here, memories refer to a subset of the training data. Such AI models can learn from data while respecting properties such as bounded-width and adherence to scientific knowledge. We establish sound theoretical guarantees for such models in the context of imitation learning, along with approximation bounds. For high-dimensional input sensors such as cameras and LiDAR, robustness to distribution shifts is often a major concern.
To this end, I will discuss some of my recent efforts that use a similar idea of memories to build effective anomaly detectors and robust classifiers. This has proven especially useful in the medical domain, where it is often important to incorporate expert input from clinicians. Finally, I will end with some future directions stemming from this line of work, aimed at achieving trustworthy AI in the context of cyber-physical systems.
Souradeep Dutta, Ph.D., is currently a postdoctoral researcher in the Department of Computer and Information Science at the University of Pennsylvania, where he works with Professor Insup Lee. He received his doctorate from the University of Colorado at Boulder for his thesis “Verification of Neural Networks,” under the supervision of Professor Sriram Sankaranarayanan. He received his bachelor’s degree in engineering from Jadavpur University, Kolkata, India.
His research interests are in the areas of formal methods for the verification of deep neural networks and techniques for safety assurance of learning-based autonomous systems used in medical and robotic applications.