Active Projects

Decision Procedures for High-Assurance, AI-controlled CPS (NSF, CAREER)

From simple logical constructs to complex deep neural network models, Artificial Intelligence (AI) agents are increasingly controlling physical/mechanical systems. Self-driving cars, drones, and smart cities are just a few examples of AI-controlled Cyber-Physical Systems (CPS). However, despite the explosion in the use of AI across a multitude of CPS domains, the safety and reliability of these AI-controlled CPS remains an understudied problem. It is then unsurprising that these systems have failed in several safety-critical situations, in some cases leading to human fatalities.

Mathematically based techniques for the specification, development, and verification of software and hardware systems, also known as formal methods, hold the promise of providing rigorous analysis of the reliability and safety of engineered systems. Unfortunately, there is currently a significant disconnect within the field of CPS formal verification. On one side, the celebrated achievements of the software and hardware verification community do not generalize to analyzing the dynamics of the underlying physical components of CPS. On the other side, the techniques developed in the control theory community have focused mainly on the formal verification of the dynamics of physical/mechanical systems while ignoring the complexity of the computational and cyber components. This project isolates the key reasons for those gaps and addresses them systematically, leading to a new generation of formal methods that are capable of simultaneously analyzing the cyber components (including AI agents in general, and deep neural networks in particular) and the physical components of CPS. This new generation of formal methods will be used to analyze the safety and reliability of AI-controlled CPS, characterize the environments for which the system is guaranteed to operate correctly, automatically find vulnerabilities in AI logic, and predict failures in real time.
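
Below is a minimal, illustrative sketch of the kind of encoding such formal methods rely on: a tiny ReLU network standing in for an AI controller, together with a bound on its output, posed as satisfiability constraints. The weights, the operating region, and the bound are hypothetical, and Z3 is used here only as an example SMT solver, not as the project's toolchain.

    # Hypothetical example: checking that a tiny ReLU "controller" keeps its
    # output within a bound over an operating region, via SMT constraints.
    from z3 import Real, If, And, Or, Solver, sat

    def relu(e):
        return If(e > 0, e, 0)

    x = Real('x')                    # plant state fed to the neural controller
    h1 = relu(2 * x - 1)             # hidden neuron 1 (hypothetical weights)
    h2 = relu(-1 * x + 3)            # hidden neuron 2
    u = 0.5 * h1 - 0.25 * h2         # network output = control command

    s = Solver()
    s.add(And(x >= -1, x <= 1))      # operating region of interest
    s.add(Or(u < -2, u > 2))         # negation of the safety property |u| <= 2

    if s.check() == sat:
        print("unsafe state found:", s.model()[x])
    else:
        print("control command stays in [-2, 2] over the region")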

Resilient-by-Cognition Cyber-Physical Systems (NSF, CPS)

State-of-the-art cyber-physical systems (CPS) are far from being fully autonomous. Regardless of how many situations they were trained (or programmed) to consider, autonomous CPS will always face new situations that the human designer failed to examine during the design phase. The problem is exacerbated when malicious agents are present. In such circumstances, malicious agents could exploit the inability of CPS to adapt to new situations, creating safety-threatening scenarios. This raises the question of how to equip CPS with a layer of “cognition” that allows them to learn from and adapt to adversarial situations and to acquire a level of resilience against the unforeseen, while still satisfying strong theoretical guarantees. We refer to CPS with such capabilities as “Resilient-by-Cognition CPS,” which are the main focus of this project.

In a biological context, adversarial situations have allowed species to evolve and thrive in their environments. Relevant and applicable insights can be found in the study of nature, such as the threat-avoidance behavior of starling flocks and the prey-capture abilities of echolocating bats. Inspired by recent advances in the field of evolutionary games (a mathematical framework that models the evolution of biological systems), this project focuses on developing a rigorous framework for reasoning about resilient-by-cognition CPS by combining ideas from game theory, formal methods, and control.
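
As a toy illustration of the evolutionary-games machinery mentioned above (not the project's own model), the sketch below simulates replicator dynamics for a hypothetical two-strategy game between "cautious" and "aggressive" agent behaviors; the payoff matrix and initial population shares are made up.

    # Replicator dynamics for a hypothetical 2-strategy game (illustrative only).
    import numpy as np

    A = np.array([[3.0, 1.0],    # A[i, j]: payoff of strategy i against strategy j
                  [4.0, 0.0]])
    x = np.array([0.9, 0.1])     # initial population shares of the two strategies
    dt = 0.01

    for _ in range(10_000):
        fitness = A @ x                      # expected payoff of each strategy
        avg = x @ fitness                    # population-average payoff
        x = x + dt * x * (fitness - avg)     # replicator equation: dx_i = x_i (f_i - f_avg) dt

    print("long-run strategy mix:", np.round(x, 3))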

Security and Networking Co-Design for Industrial IoT (Northrop Grumman)

A recent trend in IoT security is the exploitation of the heterogeneous redundancy in data collected by several nodes to provide security checks. In other words, although sensors collect data from different sources, these measurements are dependent, and hence a malicious change in some of the sensory data can be discovered by fusing the data from the remaining sensors. A major question, however, is where to place those sensors in the first place so as to maximize the ability to detect maliciously corrupted sensory data. A closely related question is how to design optimal transmission policies that allow those wireless sensors to maximize the utilization of the shared wireless channel. Unfortunately, there is a tradeoff between these two objectives: security and network utilization. Within this project, we focus on exploring this tradeoff with the aim of developing new algorithms and tools that allow engineers to co-design security and networking for IIoT systems.
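
The sketch below illustrates the redundancy idea in its simplest form (it is not the project's detection or placement algorithm): several sensors redundantly observe the same quantity, and a residual test against a robust fused estimate exposes a maliciously offset reading. The noise level and threshold are hypothetical.

    # Residual-based check over redundant sensor readings (illustrative only).
    import numpy as np

    def detect_corrupted(readings, noise_std=0.05, k=4.0):
        """Return indices of readings inconsistent with the fused estimate."""
        readings = np.asarray(readings, dtype=float)
        fused = np.median(readings)              # robust fused estimate
        residuals = np.abs(readings - fused)
        return np.where(residuals > k * noise_std)[0]

    # The third sensor has been maliciously offset; the remaining readings expose it.
    print("suspected sensors:", detect_corrupted([10.02, 9.97, 12.50]))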

Automatic Vulnerability Detection in Robotic Software

Several formal methods have been developed over the years to perform static analysis of software logic and discover software-related vulnerabilities such as buffer overflows, format-string exploits, and general memory corruption. A common thread in these tools is to parse the software code into an intermediate verification language, which is then translated into a set of logical constraints in the spirit of bounded model checking. Along with this encoding comes a set of pre- and post-conditions that capture the formal property that needs to be verified. An SMT solver is then used to localize errors and provide traces that reach the error location in the software code. Unfortunately, such software verification tools are incapable of reasoning about system-level vulnerabilities that take into consideration the safety and reliability of the physical system. This stems from the inability of state-of-the-art SMT solvers to reason about the complex nonlinear constraints that arise naturally when the continuous dynamics of the physical system are encoded. In this project, we investigate novel formal verification tools that analyze the robotic software while taking into account the underlying physics of the robot, in order to automatically find software vulnerabilities that can compromise the safety of the robotic system.
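
As a concrete, hypothetical illustration of this encoding style (not the project's tool), the sketch below unrolls a simple feedback law together with a discretized plant model for a few steps, in the spirit of bounded model checking, and asks an SMT solver (Z3 here) whether any trace violates a safety post-condition. The dynamics, gains, bounds, and unrolling depth are all made up.

    # Bounded-model-checking-style encoding of software + plant (illustrative only).
    from z3 import Real, Solver, And, Or, sat

    K = 5                                              # unrolling depth
    dt = 0.1
    pos = [Real(f'pos_{k}') for k in range(K + 1)]
    vel = [Real(f'vel_{k}') for k in range(K + 1)]

    s = Solver()
    s.add(And(pos[0] >= 0, pos[0] <= 0.5,              # pre-condition on the
              vel[0] >= 0, vel[0] <= 1))               # initial physical state

    for k in range(K):
        u = -2 * pos[k] - vel[k]                       # software: simple feedback law
        s.add(pos[k + 1] == pos[k] + dt * vel[k])      # plant: discretized dynamics
        s.add(vel[k + 1] == vel[k] + dt * u)

    s.add(Or(*[Or(p > 2, p < -2) for p in pos]))       # negated safety post-condition

    if s.check() == sat:
        print("violating trace:", [s.model()[p] for p in pos])
    else:
        print("no violating trace up to depth", K)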

Past Projects

Attack Resilient Law Enforcement Drones (Northrop Grumman)

The use of unmanned aerial vehicles (UAVs) by law enforcement agencies has witnessed rapid growth over the past few years in the United States. In 2016, at least 167 law enforcement agencies acquired drones, more than in the previous three years combined. This rapid growth in the adoption of UAVs by law enforcement agencies is driven by the need for a bird’s-eye view of crime and disaster scenes to enhance surveillance capabilities and improve situational awareness during natural disasters and catastrophic incidents. Recent successful examples of adopting remote-controlled drones include the use of a drone equipped with a thermal imaging camera to find a man who fled an accident scene and the use of a drone to reclaim stolen construction equipment.

However, with this increase in criminals’ technological abilities, it would not be surprising to see criminals adapting recently reported attacks against law enforcement drones. The long-term vision of this project is fourfold. First, we seek to understand and model potential new threats and system vulnerabilities in drones used in critical and high-risk tactical missions. Second, we seek to develop principled methods to analyze the data collected from various sensors to detect these attacks while preserving the privacy of public citizens whose sensitive data may have been captured by drone sensors. Third, to be resilient against software vulnerabilities, we seek to explore the use of hardware-assisted cyber-resilient platforms such as ARM TrustZone. Fourth, we seek to develop novel control algorithms, triggered by the detection of attacks, that are resilient to these attacks and can formally guarantee the satisfaction of high-risk tactical missions regardless of their presence. To achieve this level of resilience, drones need to be able to adapt and react to different attacks in real time. Accordingly, an overarching objective of this project is to ensure that all developed algorithms come with real-time guarantees in addition to privacy and resilience guarantees.

Attack-Resilient Cyber-Physical Systems

The rapidly increasing dependence on CPS in building critical infrastructures—in the context of smart cities, power grids, medical devices, and unmanned vehicles—will open the gates to increasingly sophisticated and harmful attacks with financial, societal, criminal or political effects. While a traditional cyber-attack may leak credit-card or other personal sensitive information, a cyber-physical-attack can lead to a loss of control in nuclear reactors, gas turbines, the power grid, transportation networks, and other critical infrastructure, placing the Nation’s security, economy, and public safety at risk. This motivates the following questions:

  • To what extent can an attacker influence the behavior of a CPS? How fatal are such attacks?
  • Can traditional cyber-security countermeasures be used to detect and mitigate attacks on CPS?
  • How can we develop a holistic approach to designing resilient CPS?

Privacy-Aware Control of IoT

As cyber-physical technology permeates every corner of our lives today, a large amount of sensory data is collected, fused, and then used to orchestrate these systems. Unfortunately, this collected sensor information can have severe privacy implications. This motivates the need for practical systems that perform sensor fusion on encrypted sensory data. One as-yet-unexplored method of preserving the privacy of sensor data is to encrypt all sensory data before its transmission to a central controller. This requires the control software to perform all necessary calculations over encrypted data; consequently, the resulting actuation signal itself is encrypted as well. Such techniques can be found in the literature on Secure Function Evaluation (SFE), which includes homomorphic cryptography, Multi-Party Computation schemes such as garbled circuits and secret sharing, and others.

On the one hand, a straightforward solution is to implement existing signal processing and control algorithms, as they are, using one or more of these cryptographic primitives; this results in prohibitively high execution times and/or an enormous number of communication packets exchanged between the sensors and the central controller, hindering the practical use of such techniques. On the other hand, SFE techniques such as Partially Homomorphic Encryption (PHE) have matured in recent years to a point where they can be carried out with much greater efficiency. Unfortunately, current advanced control algorithms have not been designed with privacy in mind and hence cannot be implemented using only computationally efficient techniques such as PHE. This project aims at bridging this gap by combining ideas from convex programming and secure function evaluation to develop novel privacy-preserving algorithms for IoT applications.
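
A minimal sketch of the PHE idea is given below; it is illustrative only, not the project's algorithm, and it assumes the open-source python-paillier package ("phe") with hypothetical gains and measurements. It evaluates a linear feedback law entirely on ciphertexts, since Paillier encryption supports adding ciphertexts and multiplying them by plaintext scalars, which is exactly what a linear controller needs.

    # Linear control law evaluated over Paillier-encrypted measurements
    # (illustrative only; assumes the python-paillier package).
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    # Sensors encrypt their measurements before transmission.
    measurements = [0.42, -1.3, 0.07]
    enc_measurements = [public_key.encrypt(m) for m in measurements]

    # The (untrusted) controller computes u = sum_i K_i * x_i on ciphertexts only.
    K = [1.5, 0.2, -3.0]
    enc_u = sum(k * x for k, x in zip(K, enc_measurements))

    # Only the actuator, holding the private key, recovers the control command.
    print("decrypted control command:", private_key.decrypt(enc_u))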

SOCIUS: Socially Responsible Smart Cities (NSF, S&CC)

Every year, 3.5 million people in the US experience homelessness, and 1 in 30 children become homeless [25]. Despite numerous government-sponsored programs and efforts by nonprofit organizations, many homeless people live in abject conditions; for example, according to the U.S. Conference of Mayors’ 2013 Status Report on Hunger & Homelessness, 21% of people across the surveyed cities who needed emergency food assistance received none. Exacerbating the situation, private citizens and businesses cannot directly distribute food themselves because of practical issues such as complaints from neighboring businesses and perceived risks to themselves. Furthermore, 31 cities in the US have tried (some successfully) to pass legislation against giving food to people in need in public spaces. Rethinking smart city technologies to best serve these people will be essential to maximize their access to services. We envision a socially-aware smart city as one that utilizes its cyber-physical infrastructure for better management of services. In other words, a socially-aware smart city is one that is responsible for monitoring the needs of its citizens and maximizing the social satisfaction of the myriad stakeholders (government, private citizens, NGOs, …). We argue that by smartly managing the efforts of public services, NGOs, and private citizens, a smart city can improve the efficiency of matching varying and unpredictable supply with those who need it.

Our long-term vision of socially-aware smart cities is to examine the theoretical foundations of population modeling; the analysis and design of socially-aware, human-centered planning algorithms; and technological advances that emphasize new secure and privacy-aware sensing modalities and mobile technologies for sensing and delivery of services in a manner that maximizes the social welfare of both the people in need and the general population. This vision is inherently multidisciplinary and involves many research activities spanning both engineering and social dimensions, where societal perceptions of utility, privacy, and security are key challenges in addressing these research problems.