We aim to develop theoretical foundations and practical algorithms for safe learning-enabled systems. Our group is focused on designing and deploying mathematically rigorous and algorithmically efficient solutions to formally verify the correct behavior of complex, safety-critical, learning-enabled decision-making algorithms. We leverage tools from machine learning, control theory, and automated reasoning to build provably correct autonomous systems, smart cities, and other complex cyber-physical systems that can operate in unknown and adversarial environments.

(1) Verified AI and AI Alignment

As AI rapidly transforms technologies and human activities, both online (e.g., algorithmically-mediated decisions and generated content) and in the real world (e.g., AI-driven cyber-physical systems), there are opportunities and challenges for individuals and society as a whole. Engineering AI to align with human values and serve society (a.k.a. “The Alignment Problem”) is arguably one of the grand challenges of our times.

This motivates the following questions:
(i) How can we design AI agents that are guaranteed to adhere to social norms of fairness, trustworthiness, and privacy?
(ii) What are the fundamental limits on the fairness, trustworthiness, and privacy of AI agents?
(iii) How can we design tools that automatically analyze and verify the fairness, trustworthiness, and privacy of neural networks?

Related Publications:
o CertiFair: A Framework for Certified Global Fairness of Neural Networks (AAAI 2023)

o DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using Bernstein Polynomial Activations and Precise Bound Propagation (AAAI 2024)

o BERN-NN-IBF: Enhancing Neural Network Bound Propagation Through Implicit Bernstein Form and Optimized Tensor Operations (IEEE TCAD 2024)

o Fast BATLLNN: Fast Box Analysis of Two-Level Lattice Neural Networks (HSCC 2023)

o PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier (CAV 2021)
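As a toy illustration of the bound-propagation idea underlying several of the certifiers above, interval bound propagation pushes an input box through an affine layer followed by a ReLU; the weights and input ranges below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate the box [l, u] through x -> W @ x + b exactly."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def interval_relu(l, u):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Hypothetical 2-input, 2-neuron layer and input box [-1, 1]^2.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.3])
l0, u0 = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

l1, u1 = interval_relu(*interval_affine(l0, u0, W, b))
print(l1, u1)  # sound (if loose) bounds on the layer's outputs
```

The bounds are sound but can be loose; techniques such as the Bernstein-polynomial propagation in the publications above are aimed precisely at tightening them.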

(2) Assured Autonomy in Robotics & Smart Cities

Regardless of how many situations autonomous systems were trained (or programmed) to handle, these systems will always face new situations that the human designer failed to examine during the design phase. The problem is exacerbated when malicious agents are present: such agents can exploit an autonomous system's inability to adapt to new situations, creating safety-threatening scenarios.

This motivates the following questions:
(i) How can we assure the trustworthiness of autonomous systems in unforeseen environments?
(ii) How can we certify the correctness of the machine learning models used to design autonomous systems?
(iii) How can we design self-adapting autonomous systems that learn and adapt from adversarial situations and acquire a level of resilience against the unforeseen, while still satisfying strong theoretical guarantees?

Related Publications:
o Neurosymbolic Motion and Task Planning for Linear Temporal Logic Tasks (IEEE Transactions on Robotics 2024)

o Certified Vision-based State Estimation using Geometric Generative Models (IEEE TNNLS)

o NNLander-VeriF: A Neural Network Formal Verification Framework for Vision-Based Autonomous Aircraft Landing (NASA Formal Methods 2022)

o Extracting Forward Invariant Sets from Neural Network-Based Control Barrier Functions (HSCC 2025)

o Certified Learning-Enabled Noise-Aware Motion Planning for Urban Air Mobility (IEEE T-RO)

(3) Attack-Resilient Cyber-Physical Systems

The rapidly increasing dependence on CPS in building critical infrastructures—in the context of smart cities, power grids, medical devices, and unmanned vehicles—will open the gates to increasingly sophisticated and harmful attacks with financial, societal, criminal or political effects. While a traditional cyber-attack may leak credit-card or other personal sensitive information, a cyber-physical-attack can lead to a loss of control in nuclear reactors, gas turbines, the power grid, transportation networks, and other critical infrastructure, placing the Nation’s security, economy, and public safety at risk.

This motivates the following questions:
(i) To what extent can an attacker influence the behavior of a CPS, and how damaging are such attacks?
(ii) Can traditional cyber-security countermeasures be used to detect and mitigate attacks on CPS?
(iii) How can we develop automated tools to design and ensure the resilience of CPS?

Related Publications:
o Rampo: A CEGAR-based Integration of Binary Code Analysis and System Falsification for Cyber-Kinetic Vulnerability Detection (ICCPS 2024)

o Sybil-attack resilient traffic networks: A physics-based trust propagation approach (ICCPS 2018)

o Secure state estimation under sensor attacks: A Satisfiability Modulo Theory approach (IEEE TAC 2017)

o PrOLoc: Resilient localization using private observers (IPSN 2017)

o PyCRA: physical challenge-response authentication for active sensors under spoofing attacks (CCS 2015)

(4) Formal Verification Tools for Complex CPS

The infamous Boeing 737-MAX crashes are dire warnings about the future of interaction between software and physical systems, especially in safety-critical cyber-physical systems (CPSs). In such systems, driven by the push for ever-greater autonomy, the software is designed to take actions that affect the human physical environment. Moreover, in many situations, the software is even permitted to override actions and decisions made by its human operators.

This motivates the following questions:
(i) How can we develop and deploy mathematically rigorous and algorithmically efficient solutions to formally verify the correct behavior of complex CPSs?
(ii) How can we design tools that provide a scientific basis for understanding the fundamental properties of complex CPSs, guide their design, and help software developers diagnose and eliminate bugs?

Related Publications:
o SMC: Satisfiability Modulo Convex programming (Proceedings of the IEEE 2018)

o PolyARBerNN: A Neural Network Guided Solver and Optimizer for Bounded Polynomial Inequalities (ACM TECS 2024)

o PolyAR: A Highly Parallelizable Solver For Polynomial Inequality Constraints Using Convex Abstraction Refinement (IFAC ADHS 2021)

o EnergyShield: Provably-Safe Offloading of Neural Network Controllers for Energy Efficiency (ICCPS 2023)

Current Projects

ScenicMR: Personalized, Privacy-Preserving, Mixed-Reality Platform for Home-Based Patient Rehabilitation (UC Noyce Initiative 2024-2025)

Every year, over 800,000 people in the United States suffer from stroke. Stroke rehabilitation administered through virtual and augmented reality (VR/AR) – collectively known as mixed reality (MR) – with visual and auditory guidance by the MR system can increase the amount of rehabilitation that a patient receives while lowering the need for one-on-one supervision by therapists. However, in practice, personalizing the therapy for each patient’s unique physical deficits, rehabilitation goals, and home environments requires an impractically large number of software developers to program exercises, driving up the cost of care. Our key innovation empowers therapists (non-programmers) to independently author personalized exercise programs in AR, aided by generative AI and our Scenic programming language technology, in a privacy-preserving manner, while holistically incorporating individual needs. It makes significant technical contributions to multiple fields, including stroke rehabilitation, AI-based code generation, mixed reality systems, human-computer interaction, and security and privacy.

The objective of our research is to develop a novel computational platform for personalized, accessible, and affordable therapy for stroke patients to continue their physical rehabilitation from their homes. We have been developing a medical app for commodity augmented reality (AR) headsets along with a software system that will be used by medical personnel to interact with and personalize the patient’s treatment.

Community-Driven Design of Fair, Urban Air Mobility Transportation Management Systems (NSF SCC 2023-2027)

Urban Air Mobility (UAM) envisions integrating the sky space into the transportation network and encompasses services such as delivery drones, on-demand shared mobility by Vertical Take-Off and Landing (VTOL) aircraft for intra-city passenger trips, and, in the longer run, electric and autonomous VTOLs. This possible modal alternative provides a safe, reliable, and environmentally sound option to reduce surface-level congestion. Nevertheless, the history of transportation infrastructure development shows that it is imperative to design transportation infrastructures with the community to find the best balance between these sociotechnical requirements. Much research shows that the design of transportation systems has a long-lasting, often discriminatory effect that reinforces existing socio-economic inequality. As UAM is being developed as a new transportation mode, we are at an opportune moment to design its infrastructure to provide effective and equitable air mobility for all, avoiding our past mistakes. This project will focus on understanding the preferences, attitudes, and concerns of all stakeholders of UAM, including the potential users of UAM, the general public in different communities who may be positively and/or adversely affected by UAM, policymakers, and city planners. The knowledge elicited from the stakeholders will guide the design of an open-source Computer Aided Planning tool that policy-makers and urban planners can use to design UAM infrastructure that accommodates communities' priorities and enables transportation equity. While the timeline for UAM may be in the future, its deployment may entail significant future investment in infrastructure, which makes the inclusion of equity considerations and early community engagement critical.

We propose a “Community-in-the-Loop Integrative Framework for Fair and Equitable Urban Air Mobility (UAM) Infrastructure Design”. Our integrative framework will develop methods to engage with key stakeholders to address significant socio-technical challenges, including (a) understanding the community preferences and desiderata in terms of necessary considerations for equitable mobility, (b) developing novel machine learning techniques to generate design options that optimize for community desiderata efficiently, and (c) devising community-driven evaluative measures and trade-off decision mechanisms. We address these challenges by drawing from urban and transportation engineering, aerospace, and computer and information sciences. The final product of our framework is an open-source Computer Aided Planning tool called VertiCAP. VertiCAP will be equipped with novel machine learning-based algorithms to navigate complex design space options, including long-term decisions (i.e., allocation of UAM airports, also known as vertiports), medium-term decisions (i.e., design of air space), and short-term decisions (i.e., air-traffic control). We will establish a “community council” representing different stakeholders. Through continuous interactions with the community council, we will evaluate and demonstrate the effectiveness of the developed VertiCAP tool in the City of Austin, TX and Southern California.

ASTrA: Automated Synthesis for Trustworthy Autonomous Utility Services (NSF CPS, 2021-2025)

Large-scale systems with societal relevance, such as power generation systems, are increasingly able to leverage new technologies to mitigate their environmental impact, e.g., by harvesting energy from renewable sources. This NSF CPS project aims to investigate methods and computational tools to design a new user-centric paradigm for energy apportionment and distribution and, more broadly, for trustworthy utility services. In this paradigm, distributed networked systems will assist the end users of electricity in scheduling and apportioning their consumption. Further, they will enable local and national utility managers to optimize the use of green energy sources while mitigating the effects of intermittency, and to promote fairness, equity, and affordability. This project pursues a tractable approach to address the challenges of modeling and designing these large-scale, mixed-autonomy, multi-agent CPSs. The intellectual merits include new scalable methods, algorithms, and tools for the design of distributed decision-making strategies and system architectures that can assist the end users in meeting their goals while guaranteeing compliance with the fairness, reliability, and physical constraints of the design.

The proposed framework, termed Automated Synthesis for Trustworthy Autonomous Utility Services (ASTrA), addresses the design challenges via a three-pronged approach. It uses population games to model the effect of distributed decision-making infrastructures (DMI) on large populations of strategic agents. DMIs will be realized via dedicated networked hybrid hardware architectures and algorithms we seek to design. ASTrA further introduces a systematic, layered methodology to automate the design, verification, and validation of DMIs from expressive representations of the requirements. Finally, it offers a set of cutting-edge computational tools to facilitate our methodology by enabling efficient reasoning about the interaction between discrete models, e.g., used to describe complex missions or embedded software components, and continuous models used to describe physical processes. The evaluation plan involves experimentation on a real testbed designed for zero-net-energy applications.

Decision Procedures for High-Assurance, AI-controlled CPS (NSF CAREER, 2019-2024)

From simple logical constructs to complex deep neural network models, Artificial Intelligence (AI) agents are increasingly controlling physical and mechanical systems. Self-driving cars, drones, and smart cities are just a few examples of AI-controlled Cyber-Physical Systems (CPS). However, despite the explosion in the use of AI across a multitude of CPS domains, the safety and reliability of these AI-controlled CPS is still an understudied problem. It is then unsurprising that these systems have failed in several safety-critical situations, leading to human fatalities.

Mathematically based techniques for the specification, development, and verification of software and hardware systems, also known as formal methods, hold the promise of providing rigorous analysis of the reliability and safety of engineered systems. Unfortunately, there is currently a huge disconnect within the field of CPS formal verification. On one side, the celebrated achievements in software and hardware verification do not generalize to analyzing the dynamics of the underlying physical components of CPS. On the other side, the techniques developed in the control theory community have focused mainly on the formal verification of the dynamics of physical and mechanical systems while ignoring the complexity of the computational and cyber components. This project isolates the key reasons for these gaps and addresses them systematically, leading to a new generation of formal methods capable of simultaneously analyzing the cyber components (including AI agents in general, and deep neural networks in particular) and the physical components of CPS. This new generation of formal methods will be used to analyze the safety and reliability of AI-controlled CPS, characterize the environments in which the system is guaranteed to operate correctly, automatically find vulnerabilities in AI logic, and predict failures in real time.

Resilient-by-Cognition Cyber-Physical Systems (NSF CPS 2018-2023)

State-of-the-art cyber-physical systems (CPS) are far from being fully autonomous. Regardless of how many situations they were trained (or programmed) to handle, autonomous CPS will always face new situations that the human designer failed to examine during the design phase. The problem is exacerbated when malicious agents are present: such agents can exploit a CPS's inability to adapt to new situations, creating safety-threatening scenarios. This opens the question of how to equip CPS with a layer of “cognition” that allows them to learn and adapt from adversarial situations and acquire a level of resilience against the unforeseen, while still satisfying strong theoretical guarantees. We refer to CPS with such capabilities as “Resilient-by-Cognition CPS”, the main focus of this project.

In a biological context, adversarial situations have allowed species to evolve and thrive in nature. Relevant and applicable insights can be found in the study of nature, such as the threat-avoidance behavior of starling flocks and the prey-capture abilities of echolocating bats. Inspired by recent advances in the field of evolutionary games (a mathematical framework that models the evolution of biological systems), we focus in this project on developing a rigorous framework for reasoning about resilient-by-cognition CPS by combining ideas from game theory, formal methods, and control.
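As a minimal sketch of the evolutionary-game machinery, replicator dynamics grow each strategy's population share in proportion to how its fitness compares to the population average. The classic Hawk-Dove payoffs below are a standard textbook example, not the project's actual model.

```python
import numpy as np

# Hawk-Dove payoff matrix: resource value V = 2, cost of fighting C = 4.
# Entry A[i, j] is the payoff to strategy i against strategy j.
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])  # rows/cols: Hawk, Dove

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation x_i' = x_i * (f_i - mean fitness)."""
    f = A @ x                 # fitness of each strategy in population state x
    return x + dt * x * (f - x @ f)

x = np.array([0.9, 0.1])      # start with mostly Hawks
for _ in range(20000):
    x = replicator_step(x, A)
print(x)  # converges toward the mixed equilibrium: Hawk share = V / C = 0.5
```

The fixed point the dynamics settle on is the game's evolutionarily stable state; the project's interest is in steering such population-level dynamics while preserving formal guarantees.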

Automatic Vulnerability Detection in Robotic Software (UMD/NGC 2017-2019)

Several formal methods have been developed over the years to perform static analysis of software logic to discover software-related vulnerabilities such as buffer overflows, string format exploits, and general memory corruption. A common thread in these tools is to parse the software code into an intermediate verification language, which is then translated into a set of logical constraints, in the spirit of bounded model checking. Along with this encoding comes a set of pre- and post-conditions that capture the formal property that needs to be verified. An SMT solver is then used to localize errors and provide traces that reach the error location in the software code. Unfortunately, such software verification tools are incapable of reasoning about system-level vulnerabilities that take into consideration the safety and reliability of the physical system. This stems from the limitations of state-of-the-art SMT solvers in reasoning about the complex nonlinear constraints that appear naturally when the continuous dynamics of the physical system are encoded. In this project, we investigate novel formal verification tools that can analyze robotic software while taking into account the underlying physics of the robot, in order to automatically find software vulnerabilities that can hinder the safety of the robotic system.

Privacy-Aware Control of IoT (NSF SaTC)

As cyber-physical technology permeates every corner of our lives today, a large amount of sensory data is collected, fused, and then used to orchestrate these systems. Unfortunately, this collected sensor information can have severe privacy implications. This motivates the need for building practical systems that perform sensor fusion on encrypted sensory data. One as-yet-unexplored method of preserving the privacy of sensor data is to encrypt all sensory data before its transmission to a central controller. This would require the control software to perform all necessary calculations over encrypted data, and consequently, the resulting actuation signal itself would be encrypted as well. Such techniques can be found in the literature on Secure Function Evaluation (SFE), which includes homomorphic cryptography, multi-party computation schemes such as garbled circuits, secret sharing, and others.

On the one hand, a straightforward solution is to implement existing signal processing and control algorithms, as they are, using one or more of these cryptographic primitives, resulting in prohibitively high execution times and/or enormous amounts of exchanged communication packets between sensors and the central controller, hindering the practical use of such techniques. On the other hand, SFE techniques like Partially Homomorphic Encryption (PHE) have matured in recent years to a point where they can be carried out with much greater efficiency. Unfortunately, current advanced control algorithms have not been designed with privacy in mind, and hence they cannot be implemented using only computationally efficient techniques like PHE. This project aims at bridging this gap by combining ideas from convex programming and secure function evaluation towards novel privacy-preserving algorithms for IoT applications.
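A minimal sketch of the additively homomorphic property that makes linear control laws computable over ciphertexts, using a toy Paillier cryptosystem. The 17-bit primes and the gain values are for illustration only; a real deployment needs 2048-bit moduli and a vetted cryptographic library.

```python
import math, random

# Toy Paillier: Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2, and
# Enc(m)^k decrypts to k*m, so a controller can evaluate a linear law
# u = sum_i k_i * x_i over encrypted measurements x_i without seeing them.
p, q = 65003, 65011            # toy primes; NOT secure
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Encrypted evaluation of a hypothetical control law u = 3*x1 + 2*x2.
c = (pow(enc(5), 3, n2) * pow(enc(7), 2, n2)) % n2
print(dec(c))  # 3*5 + 2*7 = 29, computed entirely on ciphertexts
```

Note that only additions and plaintext-scalar multiplications come for free; comparisons, saturations, and the other nonlinearities of advanced controllers are precisely what PHE alone cannot handle, which motivates the co-design pursued in this project.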

SOCIUS: Socially Responsible Smart Cities (NSF S&CC, 2015-2017)

Every year, 3.5 million people in the US experience homelessness, and 1 in 30 children become homeless [25]. Despite numerous government-sponsored programs and efforts by nonprofit organizations, many homeless people live in abject conditions; for example, according to the U.S. Conference of Mayors' 2013 Status Report on Hunger & Homelessness, across the surveyed cities 21% of people who needed emergency food assistance received none. Exacerbating the situation, private citizens and businesses cannot directly distribute food themselves because of practical issues such as complaints from neighboring businesses and perceived risks to themselves. Furthermore, 31 cities in the US have tried (some successfully) to pass legislation against giving food to people in need in public spaces. Rethinking smart city technologies to best serve these people will be essential to maximize their access to services. We envision a socially-aware smart city as one that utilizes its cyber-physical infrastructure for better management of services. In other words, a socially-aware smart city is one that is responsible for monitoring the needs of its citizens and maximizing the social satisfaction of the myriad stakeholders (government, private citizens, NGOs, …). We argue that by smartly managing the efforts of public services, NGOs, and private citizens, a smart city can improve the efficiency of matching the varying and unpredictable supply with those who need it.

Our long-term vision for socially-aware smart cities is to examine the theoretical foundations of population modeling; the analysis and design of socially-aware, human-centered planning algorithms; and technological advances that emphasize new secure, privacy-aware sensing modalities and mobile technologies for sensing and the delivery of services in a manner that maximizes the social welfare of both people in need and the general population. Arguably, this vision is multidisciplinary, involving many research activities spanning engineering and social aspects, where societal perceptions of utility, privacy, and security are key challenges in addressing these research problems.