We study the problem of fitting a piecewise affine (PWA) function to input-output data. Our algorithm divides the input domain into finitely many regions whose shapes are specified by a user-provided template, such that the input-output data in each region are fit by an affine function within a user-provided error tolerance. We first prove that this problem is NP-hard. Then, we present a top-down algorithmic approach for solving the problem. The algorithm considers subsets of the data points in a systematic manner, trying to fit an affine function for each subset using linear regression. If regression fails on a subset, the algorithm extracts a minimal set of points from the subset (an unsatisfiable core) that is responsible for the failure. The identified core is then used to split the current subset into smaller ones. By combining this top-down scheme with a set-covering algorithm, we derive an overall approach that provides optimal PWA models for a given error tolerance, where optimality refers to minimizing the number of pieces of the PWA model. We demonstrate our approach on three numerical examples that include PWA approximations of a widely used nonlinear insulin-glucose regulation model and a double inverted pendulum with soft contacts.
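As a toy illustration of the top-down scheme, the sketch below fits affine pieces by least squares and splits on failure. The median split along the widest input dimension and the least-squares fit are stand-ins for the paper's unsatisfiable-core-guided splitting and template-constrained regression:

```python
import numpy as np

def fit_affine(X, y, eps):
    """Least-squares affine fit; accept only if every residual is within eps."""
    A = np.hstack([X, np.ones((len(X), 1))])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c if np.max(np.abs(A @ c - y)) <= eps else None

def pwa_pieces(X, y, eps):
    """Try one affine piece for the whole subset; on failure, split and recurse.
    The median split below is a placeholder for the unsat-core-guided split."""
    c = fit_affine(X, y, eps)
    if c is not None or len(X) <= 2:
        return [(X, y, c)]
    j = np.argmax(np.ptp(X, axis=0))        # widest input dimension
    left = X[:, j] <= np.median(X[:, j])
    if left.all() or not left.any():        # degenerate split: give up
        return [(X, y, c)]
    return pwa_pieces(X[left], y[left], eps) + pwa_pieces(X[~left], y[~left], eps)
```

On data sampled from y = |x|, the recursion splits once at the kink and returns two exactly-fit affine pieces.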
Anticipating Oblivious Opponents in Stochastic Games
Shadi Tasdighi Kalat,
Sriram Sankaranarayanan,
and Ashutosh Trivedi
In Workshop on Algorithmic Foundations of Robotics (WAFR),
pp. TBA,
2024.
We present an approach for systematically anticipating the actions and policies employed by oblivious environments in concurrent stochastic
games, while maximizing a reward function. Our main contribution lies in
the synthesis of a finite information state machine (ISM) whose alphabet
ranges over the actions of the environment. Each state of the ISM is mapped
to a belief state about the policy used by the environment. We introduce a
notion of consistency that guarantees that the belief states tracked by the
ISM stay within a fixed distance of the precise belief state obtained by
knowledge of the full history. We provide methods for checking consistency of
an automaton and a synthesis approach which, upon successful termination,
yields an ISM. We construct a Markov Decision Process (MDP) that serves
as the starting point for computing optimal policies for maximizing a reward
function defined over plays. We present an experimental evaluation over
benchmark examples including human activity data for tasks such as cataract
surgery and furniture assembly, wherein our approach successfully anticipates
the policies and actions of the environment in order to maximize the reward.
Automated Assessment and Adaptive Multimodal Formative Feedback Improves Psychomotor Skills Training Outcomes in Quadrotor
Teleoperation
Emily Jensen,
Sriram Sankaranarayanan,
and Bradley Hayes
In Proceedings of Human Agent Interaction (HAI),
pp. TBA,
2024.
The workforce will need to continually upskill in order to meet the
evolving demands of industry, especially when working with robotic
and autonomous systems. Current training methods are not
scalable and do not adapt to the skills that learners already
possess. In this work, we develop a system that automatically
assesses learner skill in a quadrotor teleoperation task using
temporal logic task specifications. This assessment is used to
generate multimodal feedback based on the principles of
effective formative feedback. Participants perceived the
feedback positively. Those receiving formative feedback viewed
the feedback as more actionable than did those receiving summary
statistics. Participants in the multimodal feedback condition
were more likely to achieve a safe landing and increased their
rate of safe landings over the course of the experiment more than
participants in other feedback conditions. Finally, we identify themes to improve
adaptive feedback and discuss how training for complex
psychomotor tasks can be integrated with learning theories.
What is your discount factor?
Shadi Tasdighi Kalat,
Sriram Sankaranarayanan,
and Ashutosh Trivedi
In Proc. of QEST+FORMATS'24,
Lecture Notes in Computer Science
Vol. 14996,
pp. 322-336,
2024.
We study the problem of inferring the discount factor of an agent
optimizing a discounted reward objective in a finite state
Markov Decision Process (MDP). Discounted reward objectives
are common in sequential optimization, reinforcement learning,
and algorithmic game theory. The discount factor is an
important parameter used in formulating the discounted reward.
It captures the "time value" of the reward, i.e., how much
reward in hand now would equal a promised reward at a future time.
Knowing an agent's discount factor can provide valuable
insights into their decision-making, and help predict their
preferences in previously unseen environments. However,
pinpointing the exact value of the discount factor used by the
agent is a challenging problem. Ad-hoc guesses are often
incorrect. This paper focuses on the problem of computing the
range of possible discount factors for a rational agent given
their policy. A naive solution to this problem can be quite
expensive. A classic result by Smallwood shows that the
interval [0, 1) of possible discount factors can be partitioned
into finitely many sub-intervals, such that the optimal policy
remains the same for each such sub-interval. Furthermore,
optimal policies for neighboring sub-intervals differ at a
single state. We show how Smallwood's result can be exploited
to search for discount factor intervals for which a given
policy is optimal by reducing it to polynomial root isolation.
We extend the result to situations where the policy is
suboptimal, but with a value function that is close to
optimal. We develop numerical approaches to solve the discount
factor elicitation problem and demonstrate the effectiveness
of our algorithms through some case studies.
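The condition being tested, that the given policy is greedy with respect to its own value function, can be sketched numerically. The grid search below is a coarse stand-in for the paper's exact root-isolation computation of the discount factor intervals:

```python
import numpy as np

def policy_value(P, r, pi, gamma):
    """Value of a deterministic policy pi in an MDP with transition matrices
    P[a] (row P[a][s] is the next-state distribution) and rewards r[a][s]."""
    n = P[0].shape[0]
    Ppi = np.array([P[pi[s]][s] for s in range(n)])
    rpi = np.array([r[pi[s]][s] for s in range(n)])
    return np.linalg.solve(np.eye(n) - gamma * Ppi, rpi)

def consistent_gammas(P, r, pi, grid):
    """Return the grid points gamma at which pi is greedy with respect to its
    own value function, i.e., discount factors for which pi is optimal."""
    good = []
    for g in grid:
        v = policy_value(P, r, pi, g)
        q = np.array([r[a] + g * (P[a] @ v) for a in range(len(P))])
        if all(q[pi[s], s] >= q[:, s].max() - 1e-9 for s in range(len(pi))):
            good.append(g)
    return good
```

For a two-state MDP where action 0 pays 1 and stays put while action 1 pays nothing but moves to a state paying 2 forever, the patient policy is optimal exactly for gamma in [1/2, 1), which the grid search recovers.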
Optimal Planning for Timed Partial Order Specifications
This paper addresses the challenge of planning a sequence of tasks to
be performed by multiple robots while minimizing the overall
completion time subject to timing and precedence constraints.
Our approach uses the Timed Partial Orders (TPO) model to
specify these constraints. We translate this problem into a
Traveling Salesman Problem (TSP) variant with timing and
precedence constraints, and we solve it as a Mixed Integer
Linear Programming (MILP) problem. Our contributions include a
general planning framework for TPO specifications, a MILP
formulation accommodating time windows and precedence
constraints, its extension to multi-robot scenarios, and a
method to quantify plan robustness. We demonstrate our
framework on several case studies, including an aircraft
turnaround task involving three Jackal robots, highlighting
the approach's potential applicability to important real-world
problems. Our benchmark results show that our MILP method
outperforms the state-of-the-art open-source TSP solver OR-Tools.
Algorithms for Identifying Flagged and Guarded Linear Systems
Guillaume O. Berger,
Monal Narasimhamurthy,
and Sriram Sankaranarayanan
In Hybrid Systems: Computation and Control (HSCC),
pp. 15:1–15:13,
2024.
We present an approach for identifying two subclasses of
piecewise affine (PWA) systems that we call flagged and
guarded linear systems. Flagged linear system dynamics are
given by a sum of k linear dynamical modes, each activated
based on a latent binary variable, called a flag.
Additionally, guarded linear systems define each flag as the
sign of an affine "guard" function. We term the discovery of
the latent flag values and the corresponding linear dynamics
as the "flagged regression" and "guarded regression" problems,
respectively. We show that the system identification problem
is NP-hard even for these restricted models. For both problems, we
provide approximation algorithms that identify a model whose
error is within a user-defined constant of the optimum.
The time complexity of these algorithms is linear in
the number of data points but exponential in the state-space
dimension and the number of flags. The linear complexity in
data size allows our approach to potentially scale to large
data sets. We evaluate our algorithms on benchmark problems,
learning models for mechanical systems with contact forces
and for a nonlinear robotic arm. Our approach
compares favorably against neural network learning and the
PARC algorithm for identifying PWA models proposed by
Bemporad.
This paper presents temporal behavior trees (TBT), a
specification formalism inspired by behavior trees that are
commonly used to program robotic applications. We then
introduce the concept of trace segmentation, wherein given a
TBT specification and a trace, we split the trace optimally
into sub-traces that are associated with various portions of
the TBT specification. Segmentation of a trace then serves to
explain precisely how a trace satisfies or violates a
specification, and which portions of a specification are
actually violated. We introduce the syntax and semantics of
TBT and compare their expressiveness to that of temporal
logic. Next, we define robustness semantics for TBT
specifications with respect to a trace. Rather than a Boolean
interpretation, the robustness provides a real-valued
numerical outcome that quantifies how close or far away a
trace is from satisfying or violating a TBT specification. We
show that computing the robustness of a trace also segments it
into subtraces. Finally, we provide efficient approximations
for computing robustness and segmentation for long traces with
guarantees on the result. We demonstrate how segmentations are
useful through applications such as understanding how novice
users pilot an aerial vehicle through a sequence of waypoints
in desktop experiments and the offline monitoring of an automated
lander for a drone on a ship. Our case studies demonstrate how
TBT specification and segmentation can be used to understand
and interpret complex behaviors of humans and automation in
cyber-physical systems.
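A brute-force toy version of segmentation for a two-leaf sequence, where each sub-trace is judged by an "always"-style robustness, can be sketched as follows (the paper handles full TBT specifications and long traces efficiently; the predicates and trace here are illustrative):

```python
def robustness(segment, pred):
    """Robustness of 'always pred >= 0' over a finite segment: the minimum margin."""
    return min(pred(x) for x in segment)

def segment_two(trace, p1, p2):
    """Split a trace into two consecutive sub-traces, judged by p1 and p2
    respectively, maximizing the worse of the two robustness values."""
    best_t, best_r = None, float("-inf")
    for t in range(1, len(trace)):
        r = min(robustness(trace[:t], p1), robustness(trace[t:], p2))
        if r > best_r:
            best_t, best_r = t, r
    return best_t, best_r
```

The returned split point explains which part of the trace is matched against which part of the specification, and the real value quantifies the margin of satisfaction.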
We present an abstract interpretation approach for synthesizing nonlinear (semi-algebraic) positive invariants for systems of polynomial ordinary differential equations (ODEs) and switched systems.
The key idea behind our approach is to connect the system under study to a positive nonlinear system through a "change of variables".
The positive invariance of the first orthant (R+) for a positive system guarantees, in turn, that the functions involved in the change of variables define a positive invariant for the original system.
The challenge lies in discovering such functions for a given system.
To this end, we characterize positive invariants as fixed points under an operator that is defined using the Lie derivative.
Next, we use abstract-interpretation approaches to systematically compute this fixed point.
Whereas abstract interpretation has been applied to the static analysis of programs, and invariant synthesis for hybrid systems to a limited extent, we show how these approaches can compute fixed points over cones generated by polynomials using sum-of-squares optimization and its relaxations.
Our approach is shown to be promising over a set of small but hard-to-analyze nonlinear models, wherein it is able to generate positive invariants to place useful bounds on their reachable sets.
2023
Counterexample-guided computation of polyhedral Lyapunov functions for piecewise linear systems
This paper presents a counterexample-guided iterative algorithm to compute convex, piecewise linear (polyhedral) Lyapunov functions for continuous-time piecewise linear systems. Polyhedral Lyapunov functions provide an alternative to commonly used polynomial Lyapunov functions. Our approach first characterizes intrinsic properties of a polyhedral Lyapunov function, including its "eccentricity" and "robustness" to perturbations. We then derive an algorithm that either computes a polyhedral Lyapunov function proving that the system is asymptotically stable, or concludes that no polyhedral Lyapunov function exists whose eccentricity and robustness parameters satisfy some user-provided limits. Significantly, our approach places no a priori bound on the number of linear pieces that make up the desired polyhedral Lyapunov function. The algorithm alternates between a learning step and a verification step, always maintaining a finite set of witness states. The learning step solves a linear program to compute a candidate Lyapunov function compatible with a finite set of witness states. In the verification step, our approach verifies whether the candidate Lyapunov function is a valid Lyapunov function for the system. If verification fails, we obtain a new witness. We prove a theoretical bound on the maximum number of iterations needed by our algorithm. We demonstrate the applicability of the algorithm on numerical examples.
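The verification step of such a loop can be illustrated with a sampling-based check. This is a heuristic stand-in only: the paper's verifier is exact and returns a genuine counterexample witness on failure. Here V(x) = max_i c_i·x, and the continuous dynamics x' = Ax are discretized with a small Euler step, both assumptions made for illustration:

```python
import numpy as np

def check_poly_lyap(C, A, n_samples=500, dt=1e-3, seed=0):
    """Sampling-based (non-exhaustive) check that V(x) = max_i c_i . x
    decreases along Euler steps of x' = A x. Returns None if no violation is
    sampled, else a candidate counterexample state."""
    rng = np.random.default_rng(seed)
    for x in rng.standard_normal((n_samples, A.shape[0])):
        if np.max(C @ (x + dt * (A @ x))) >= np.max(C @ x):
            return x
    return None
```

For the stable system x' = -x, the infinity-norm (rows ±e1, ±e2 in C) passes the check; for the unstable x' = x, a violating state is found immediately.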
Temporal Logic-Based Intent Monitoring for Mobile Robots
We propose a framework that uses temporal logic specifications to predict and monitor the intent of a robotic agent through passive observations of its actions over time. Our approach uses a set of possible hypothesized intents specified as Buchi automata, obtained from translating temporal logic formulae. Based on observing the actions of the robot, we update the probabilities of each hypothesis using Bayes rule.
Observations of robot actions provide strong evidence for its "immediate" short-term goals, whereas temporal logic specifications describe behaviors over a "never-ending" infinite time horizon. To bridge this gap,
we use a two-level hierarchical monitoring approach. At the lower level, we track the immediate short-term goals of the robot which are modeled as atomic propositions in the temporal logic formalism. We apply our approach to predicting intent of human workers and thus their movements in an indoor space based on the publicly available THOR dataset. We show how our approach correctly labels each agent with their appropriate intents after relatively few observations while predicting their future actions accurately over longer time horizons.
Template-Based Piecewise Affine Regression
Guillaume O. Berger,
and Sriram Sankaranarayanan
In Conference on Learning for Decision and Control (L4DC),
2023.
We investigate the problem of fitting piecewise affine functions (PWA) to data.
Our algorithm divides the input domain into finitely many polyhedral regions
whose shapes are specified using a user-defined template such that the data
points in each region are fit by an affine function within a desired error
bound. We first prove that this problem is NP-hard. Next, we present a top-down
algorithm that considers subsets of the overall data set in a systematic manner,
trying to fit an affine function for each subset using linear regression. If
regression fails on a subset, we extract a minimal set of points that led to a
failure in order to split the original index set into smaller subsets. Using a
combination of this top-down scheme and a set covering algorithm, we derive an
overall approach that is optimal in terms of the number of pieces of the
resulting PWA model. We demonstrate our approach on two numerical examples that
include PWA approximations of a widely used nonlinear insulin-glucose
regulation model and a double inverted pendulum with soft contacts.
In this work, we propose the model of timed partial orders (TPOs) for specifying workflow schedules, especially for modeling manufacturing processes. TPOs integrate partial orders over events in a workflow, specifying "happens-before" relations, with timing constraints specified using guards and resets on clocks, an idea borrowed from timed-automata specifications. TPOs naturally allow us to capture event ordering, along with a restricted but useful class of timing relationships. Next, we consider the problem of mining TPO schedules from workflow logs, which include events along with their time stamps. We demonstrate a relationship between formulating TPOs and the graph-coloring problem, and present an algorithm for learning TPOs with correctness guarantees.
We demonstrate our approach on synthetic datasets, including two datasets inspired by real-life applications of aircraft turnaround and gameplay videos of the Overcooked computer game. Our TPO mining algorithm can infer TPOs involving hundreds of events from thousands of data-points within a few seconds. We show that the resulting TPOs provide useful insights into the dependencies and timing constraints for workflows.
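A first step of such mining, extracting candidate happens-before pairs from event logs, can be sketched as follows (the paper's algorithm additionally infers clock guards and resets, with correctness guarantees; the log format here is an illustrative simplification assuming each event occurs at most once per log):

```python
def mine_happens_before(logs):
    """Mine candidate 'happens-before' pairs from workflow logs: (a, b) is
    kept when a occurs before b in every log containing both events."""
    events = sorted(set().union(*map(set, logs)))
    order = set()
    for a in events:
        for b in events:
            if a != b and all(
                    log.index(a) < log.index(b)
                    for log in logs if a in log and b in log):
                order.add((a, b))
    return order
```

Events whose relative order varies across logs are left unordered, which is exactly the partial-order flavor of the specification.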
More Than a Number: A Multi-dimensional Framework For Automatically Assessing Human Teleoperation Skill
Emily Jensen,
Bradley Hayes,
and Sriram Sankaranarayanan
In Intl. Conf. Human-Robot
Interaction, HRI 2023 (Late Breaking Report),
pp. 653–657,
2023.
We present a framework for the formal evaluation of human teleoperator skill level in a systematic fashion, aiming to quantify how skillful a particular operator is for a well-defined task. Our proposed framework has two parts. First, the tasks used to evaluate skill levels are decomposed into a series of domain-specific primitives, each with a formal specification using signal temporal logic. Second, skill levels are automatically evaluated along multiple dimensions rather than as a single number. These dimensions include robustness, efficiency, resilience and readiness for each primitive task. We provide an initial evaluation for the task of taking off, hovering, and landing in a drone simulator. This preliminary evaluation shows the value of a multi-dimensional evaluation of human operator performance.
We present an algorithm for learning switched linear dynamical systems in discrete time from noisy observations of the system's full state or output. Switched linear systems use multiple linear dynamical modes to fit the data within some desired tolerance. They arise quite naturally in applications to robotics and cyber-physical systems.
Learning switched systems from data is an NP-hard problem that is nearly identical to the k-linear regression problem of fitting k > 1 linear models to the data. A direct mixed-integer linear programming (MILP) approach yields time complexity that is exponential in the number of data points.
In this paper, we modify the problem formulation to yield an algorithm that is linear in the size of the data while remaining exponential in the number of state variables and the desired number of modes. To do so, we combine classic ideas from the ellipsoidal method for solving convex optimization problems, and well-known oracle separation results in non-smooth optimization. We demonstrate our approach on a set of microbenchmarks and a few interesting real-world problems. Our evaluation suggests that this algorithm can be practical even when compared against highly optimized off-the-shelf MILP solvers.
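A minimal sketch of the underlying k-linear regression problem, solved here with a simple alternating heuristic rather than the ellipsoid-method-based algorithm of the paper (the random initialization and refitting rule are illustrative choices):

```python
import numpy as np

def k_linear_regression(X, Y, k, iters=20, seed=0):
    """Alternate between assigning each sample (x, y) to its best-fitting mode
    and refitting each mode y ~ A_m x by least squares."""
    rng = np.random.default_rng(seed)
    n, p = X.shape[1], Y.shape[1]
    A = [rng.standard_normal((p, n)) for _ in range(k)]
    for _ in range(iters):
        resid = np.stack([np.linalg.norm(X @ Am.T - Y, axis=1) for Am in A])
        assign = resid.argmin(axis=0)
        for m in range(k):
            idx = assign == m
            if idx.sum() >= n:                  # enough points to refit mode m
                A[m] = np.linalg.lstsq(X[idx], Y[idx], rcond=None)[0].T
    resid = np.stack([np.linalg.norm(X @ Am.T - Y, axis=1) for Am in A])
    return A, resid.argmin(axis=0)
```

Each iteration is monotone in the total squared residual, but unlike the paper's algorithm this heuristic carries no approximation guarantee.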
In this paper, we present an algorithm for synthesizing polyhedral Lyapunov functions for hybrid systems with multiple modes, each with linear dynamics and state-based switching between these modes.
The problem of proving global asymptotic stability (GAS) for such systems is quite challenging.
We present an algorithm to synthesize a fixed-complexity polyhedral Lyapunov function to prove that such a system is GAS.
Such functions are defined as the piecewise maximum over a fixed number of linear functions.
Previous work on this problem reduces to a system of bilinear constraints that are well-known to be computationally hard to solve precisely.
Therefore, heuristic approaches such as alternating descent have been employed.
In this paper, we first prove that deciding if there exists a polyhedral Lyapunov function for a given piecewise linear system is an NP-hard problem.
We then present a counterexample-guided algorithm for solving this problem.
Our approach alternates between choosing candidate Lyapunov functions based on a finite set of counterexamples, and verifying if these candidates satisfy the Lyapunov conditions.
If the verification of a given candidate fails, we find a new counterexample that is added back to our set.
We prove that if this algorithm terminates, it discovers a valid Lyapunov function or concludes that no such Lyapunov function exists.
However, our initial algorithm can be non-terminating.
We modify our algorithm to provide a terminating version based on the so-called cutting-plane argument from nonsmooth optimization.
We demonstrate our algorithm on small numerical examples.
Using Artificial Potential Fields to Model Driver Situational Awareness
Emily Jensen,
Maya Luster,
Brandon Pitts,
and Sriram Sankaranarayanan
In Workshop on Cyber-Physical Human Systems (CPHS),
2022.
Recently, the use of artificial potential fields, known as risk fields, has been proposed for modeling human driver decision making. Such potential fields map from vehicle states and control inputs to a numerical risk measure, such that the probability of choosing a control decreases as its associated risk increases. In this paper, we show that such a model can be used in a natural manner to also capture aspects of the driver's situational awareness, assuming that the risk fields govern their underlying behavior. We demonstrate our ideas on a specific obstacle avoidance scenario wherein obstacles to be avoided are placed in front of a driver at predictable intervals. Using data collected in a pilot experiment involving six different drivers using a high-fidelity driving simulator, we demonstrate the ability of our approach to capture the likelihood that the driver has perceived/reacted to the obstacle. Our approach works for scenarios when the driver collides with the obstacle as well as scenarios involving successful collision avoidance.
Mathematical Models of Human Drivers using Artificial Risk Fields
In this paper, we use the concept of artificial risk fields to predict how human operators control a vehicle in response to upcoming road situations. A risk field assigns a non-negative risk measure to the state of the system in
order to model how close that state is to violating a safety property, such as hitting an obstacle or exiting the road. Using risk fields, we construct a stochastic model of the operator that maps from states to likely actions. We demonstrate our approach on a driving task wherein human subjects are asked to drive a car inside a realistic driving simulator while avoiding obstacles placed on the road. We show that the most likely risk field given the driving data is obtained by solving a convex optimization problem. Next, we apply the inferred risk fields to generate distinct driving behaviors while comparing predicted trajectories against ground truth measurements. We observe that the risk fields are excellent at predicting future trajectory distributions with high accuracy for prediction horizons of up to twenty seconds. At the same time, we observe some challenges, such as the inability to account for how drivers choose to accelerate or decelerate based on road conditions.
Decoding Output Sequences for Discrete-Time Linear Hybrid Systems
Monal Narasimhamurthy,
and Sriram Sankaranarayanan
In ACM International Conference on Hybrid Systems:
Computation and Control (HSCC),
pp. 6:1–6:7,
2022.
In this paper, we study the "decoding" problem for discrete-time, stochastic hybrid systems with linear dynamics in each mode. Given an output trace of the system, the decoding problem seeks to construct a sequence of modes and states that yield a trace "as close as possible" to the original output trace. The decoding problem generalizes the state estimation problem, and is applicable to hybrid systems with non-determinism. The decoding problem is NP-complete, and can be reduced to solving a mixed-integer linear program (MILP). In this paper, we decompose the decoding problem into two parts: (a) finding a sequence of discrete modes and transitions; and (b) finding corresponding continuous states for the mode/transition sequence. In particular, once a sequence of modes/transitions is fixed, the problem of "filling in" the continuous states is solved by a linear program. In order to support the decomposition, we "cover" the set of all possible mode/transition sequences by a finite subset. We use well-known probabilistic arguments to justify a choice of cover with high confidence and design randomized algorithms for finding such covers. Our approach is demonstrated on a series of benchmarks, wherein we observe that a relatively tiny fraction of the possible mode/transition sequences can be used as a cover. Furthermore, we show that the resulting linear programs can be solved rapidly by exploiting the tree structure of the set cover.
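The "filling in" step for a fixed mode sequence can be sketched as follows. This simplification assumes noiseless dynamics and solves a least-squares problem over the initial state, standing in for the linear program of the paper:

```python
import numpy as np

def fill_in_states(A_seq, C, Y):
    """Given a fixed mode sequence (dynamics matrices A_seq[t], so that
    x_{t+1} = A_seq[t] x_t) and outputs Y[t] ~ C x_t, recover the state
    trajectory best matching the outputs in the least-squares sense."""
    n = C.shape[1]
    Phis = [np.eye(n)]                       # x_t = Phi_t @ x_0
    for A in A_seq:
        Phis.append(A @ Phis[-1])
    M = np.vstack([C @ Phi for Phi in Phis])
    x0 = np.linalg.lstsq(M, np.asarray(Y).reshape(-1), rcond=None)[0]
    return np.stack([Phi @ x0 for Phi in Phis])
```

With the mode sequence fixed, the combinatorial part of the decoding problem disappears and only this cheap continuous problem remains, which is the point of the decomposition.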
Reachability Analysis for Cyber-Physical Systems: Are We There Yet? (Invited Paper)
Xin Chen,
and Sriram Sankaranarayanan
In Proc. NASA Formal Methods Symposium,
Lecture Notes in Computer Science
Vol. 13260,
pp. 109–130,
2022.
Reachability analysis is a fundamental problem in verification that checks for a given model and set of initial states if the system will reach a given set of unsafe states. Its importance lies in the ability to exhaustively explore the behaviors of a model over a finite or infinite time horizon. The problem of reachability analysis for Cyber-Physical Systems (CPS) is especially challenging because it involves reasoning about the continuous states of the system as well as its switching behavior. Each of these two aspects can by itself cause the reachability analysis problem to be undecidable.
In this paper, we survey recent progress in this field beginning with the success of hybrid systems with affine dynamics. We then examine the current state-of-the-art for CPS with nonlinear dynamics and those driven by "learning-enabled" components such as neural networks. We conclude with an examination of some promising directions and open challenges.
2021
Static Analysis of ReLU Neural Networks with Tropical Polyhedra
Eric Goubault,
Sebastien Palumby,
Sylvie Putot,
Louis Rustenholz,
and Sriram Sankaranarayanan
In Static Analysis Symposium (SAS),
Lecture Notes in Computer Science
Vol. 12913,
pp. 166–190,
2021.
This paper studies the problem of range analysis for feedforward neural networks, which is a basic primitive for applications such as robustness of neural networks, compliance to specifications and reachability analysis of neural-network feedback systems. Our approach focuses on ReLU (rectified linear unit) feedforward neural nets that present specific difficulties: approaches that exploit derivatives do not apply in general, the number of patterns of neuron activations can be quite large even for small networks, and convex approximations are generally too coarse. In this paper, we employ set-based methods and abstract interpretation that have been very successful in coping with similar difficulties in classical program verification.
We present an approach that abstracts ReLU feedforward neural networks using tropical polyhedra. We show that tropical polyhedra can efficiently abstract the ReLU activation function, while being able to control the loss of precision due to linear computations. We show how the connection between ReLU networks and tropical rational functions can provide approaches for range analysis of ReLU neural networks.
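For contrast with tropical polyhedra, the sketch below performs plain interval bound propagation through a ReLU network, a much coarser abstraction of the same range-analysis primitive (a ReLU after every layer is assumed for brevity):

```python
import numpy as np

def relu_interval_bounds(layers, lo, hi):
    """Propagate an input box [lo, hi] through layers [(W1, b1), (W2, b2), ...],
    splitting each weight matrix into positive and negative parts so that the
    output bounds are sound, then clipping at zero for the ReLU."""
    for W, b in layers:
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi
```

Interval bounds ignore correlations between neurons, which is exactly the imprecision that relational domains such as tropical polyhedra are designed to reduce.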
Technical perspective: An elegant model for deriving equations
This paper proposes a framework for learning task specifications from
demonstrations, while ensuring that the learned specifications
do not violate safety constraints. Furthermore, we show how
these specifications can be used in a planning problem to
control the robot under environments that can be different
from those encountered during the learning phase. We formulate
the specification learning problem as a grammatical inference
problem, using probabilistic automata to represent
specifications. The edge probabilities of the resulting
automata represent the demonstrator's preferences. The main
novelty in our approach is to incorporate the safety property
during the learning process. We prove that the resulting
automaton always respects a pre-specified safety property, and
furthermore, the proposed method can easily be included in any
Evidence-Driven State Merging (EDSM)-based automaton learning
scheme. Finally, we introduce a planning algorithm that
produces the most desirable plan by maximizing the probability
of an accepting trace of the automaton. Case studies show that
our algorithm learns the true probability distribution most
accurately while maintaining safety. Since the specification is
detached from the robot's environment model, a satisfying plan
can be synthesized for a variety of different robots and
environments including both mobile robots and manipulators.
Predictive Runtime Monitoring for Mobile Robots using Logic-Based Bayesian Intent Inference
Hansol Yoon,
and Sriram Sankaranarayanan
In International Conference on Robotics and Automation (ICRA),
pp. 8565-8571,
2021.
We propose a predictive runtime monitoring framework that forecasts the distribution of future positions of mobile robots in order to detect and avoid impending property violations such as collisions with obstacles or other agents. Our approach uses a restricted class of temporal logic formulas to represent the likely intentions of the agents along with a combination of temporal logic-based optimal cost path planning and Bayesian inference to compute the probability of these intents given the current trajectory of the robot. First, we construct a large but finite hypothesis space of possible intents represented as temporal logic formulas whose atomic propositions are derived from a detailed map of the robot's workspace. Next, our approach uses real-time observations of the robot's position to update a distribution over temporal logic formulae that represent its likely intent. This is performed by using a combination of optimal cost path planning and a Boltzmann noisy rationality model. In this manner, we construct a Bayesian approach to evaluating the posterior probability of various hypotheses given the observed states and actions of the robot. Finally, we predict the future position of the robot by drawing posterior predictive samples using a Monte-Carlo method. We evaluate our framework using two different trajectory datasets that contain multiple scenarios implementing various tasks. The results show that our method can predict future positions precisely and efficiently, so that the computation time for generating a prediction is a tiny fraction of the overall time horizon.
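The Bayesian update at the heart of such intent inference can be sketched as follows, assuming that a planner supplies per-hypothesis cost increments `costs[h]` for the observed action (a hypothetical input here, standing in for the optimal cost path planning step):

```python
import numpy as np

def bayes_intent_update(prior, costs, beta=1.0):
    """One Bayesian update over intent hypotheses with a Boltzmann
    noisy-rationality likelihood: an observed action is likelier under a
    hypothesis for which it incurs a smaller cost increment."""
    likelihood = np.exp(-beta * np.asarray(costs, dtype=float))
    posterior = np.asarray(prior, dtype=float) * likelihood
    return posterior / posterior.sum()
```

Repeating this update as observations arrive concentrates the posterior on hypotheses whose optimal plans best explain the observed trajectory.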
2020
Reasoning about Uncertainties in Discrete-Time Dynamical Systems using Polynomial Forms
Sriram Sankaranarayanan,
Yi Chou,
Eric Goubault,
and Sylvie Putot
In Advances in Neural Information Processing System (NeurIPS),
2020.
In this paper, we propose polynomial forms to represent distributions of state
variables over time for discrete-time stochastic dynamical systems. This
problem arises in a variety of applications in areas ranging from biology to
robotics. Our approach allows us to rigorously represent the probability
distribution of state variables over time, and provide guaranteed bounds on
the expectations, moments and probabilities of tail events involving the state
variables. First, we recall ideas from interval arithmetic, and use them to
rigorously represent the state variables at time t as a function of the
initial state variables and noise symbols that model the random
exogenous inputs encountered before time t. Next, we show how concentration
of measure inequalities can be employed to prove rigorous bounds on the tail
probabilities of these state variables. We present interesting
applications that demonstrate how our approach can be useful in some
situations to establish mathematically guaranteed bounds that are of a
different nature from those obtained through simulations with pseudo-random
numbers.
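A much coarser cousin of polynomial forms is plain interval arithmetic; the sketch below bounds a scalar stochastic recurrence x_{t+1} = a*x_t + w_t with bounded noise (the recurrence and the assumption a >= 0 are illustrative; polynomial forms track the dependence on the noise symbols far more tightly):

```python
def propagate_interval(a, w_lo, w_hi, x_lo, x_hi, steps):
    """Interval over-approximation of x_{t+1} = a*x_t + w_t with
    w_t in [w_lo, w_hi]. Assumes a >= 0 so endpoints map to endpoints."""
    for _ in range(steps):
        x_lo, x_hi = a * x_lo + w_lo, a * x_hi + w_hi
    return x_lo, x_hi
```

For a contractive map (a < 1) the bounds converge to a fixed interval, but they forget all distributional information, such as tail probabilities, which is what the polynomial-form representation preserves.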
We present a predictive runtime monitoring technique for estimating
future vehicle positions and the probability of collisions
with obstacles. Vehicle dynamics model how the position and
velocity change over time as a function of external inputs.
They are commonly described by discrete-time stochastic
models. Whereas positions and velocities can be measured, the
inputs (steering and throttle) are not directly measurable in
these models. In our paper, we apply Bayesian inference
techniques for real-time estimation, given a prior distribution
over the unknowns and noisy state measurements. Next, we
pre-compute a set-valued reachability analysis to
approximate future positions of a vehicle. The pre-computed
reachability sets are combined with the posterior
probabilities computed through Bayesian estimation to provide
a predictive verification framework that can be used to detect
impending collisions with obstacles. Our approach is evaluated
using the coordinated-turn vehicle model for a UAV using
on-board measurement data obtained from a flight test of a
Talon UAV. We also compare the results with sampling-based
approaches. We find that precomputed reachability analysis can
provide accurate warnings up to 6 seconds in advance and the
accuracy of the warnings improves as the time horizon is
narrowed from 6 to 2 seconds. The approach also outperforms
sampling in terms of on-board computation cost and accuracy
measures.
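The combination of Bayesian estimation with precomputed reachability can be sketched in a few lines; the mode names, measurement model, and reach-set summaries below are hypothetical stand-ins for the paper's vehicle model:

```python
import math

# Hypothetical setup: three unknown turn-rate modes; for each mode a
# reachability analysis has been precomputed offline, summarised here
# as a flag saying whether that mode's reach set hits the obstacle.
modes = ["left", "straight", "right"]
prior = {m: 1.0 / 3 for m in modes}
predicted_heading = {"left": 0.3, "straight": 0.0, "right": -0.3}
reach_hits_obstacle = {"left": True, "straight": False, "right": False}

def gaussian_likelihood(y, mu, sigma=0.1):
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2)

def posterior(prior, y):
    """Bayes update of the mode distribution from one noisy heading
    measurement y."""
    unnorm = {m: prior[m] * gaussian_likelihood(y, predicted_heading[m])
              for m in prior}
    z = sum(unnorm.values())
    return {m: v / z for m, v in unnorm.items()}

def collision_probability(post):
    """Combine the posterior with the precomputed reach sets."""
    return sum(post[m] for m in modes if reach_hits_obstacle[m])

post = posterior(prior, y=0.28)       # measurement close to "left"
p_collide = collision_probability(post)
print(post, p_collide)
```

The expensive reach-set computation happens entirely offline; online, the monitor only performs the cheap Bayes update and the weighted sum.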
Weighted Transducers for Robustness Verification
Emmanuel Filiot,
Nicolas Mazzocchi,
Jean-Francois Raskin,
Sriram Sankaranarayanan,
and Ashutosh Trivedi
In Intl. Conference on Concurrency Theory (CONCUR),
pp. 17:1–17:21,
2020.
Automata theory provides us with fundamental notions such as
languages, membership, emptiness and inclusion that in turn allow us
to specify and verify properties of reactive systems in a useful
manner. However, these notions all yield "yes"/"no" answers that
sometimes fall short of being satisfactory when the models
being analyzed are imperfect, and the observations made are prone to
errors. As a result, a common engineering approach is not just to
verify that a system satisfies a property, but that it does so
robustly. In this paper, we present notions of robustness
that place a metric on words, thus providing a natural notion of
distance between words. Such a metric naturally leads to a
topological neighborhood of words and languages, leading to
quantitative and robust versions of the membership, emptiness and
inclusion problems. The main contribution of this work is to study
such problems in the context of weighted transducers. We provide
algorithms for solving the robust and quantitative versions of the
membership and inclusion problems while providing useful motivating
case studies including modeling an error-prone human operator and
approximate pattern matching in a large type-1 diabetes dataset.
Quantitative Analysis of Programs with Probabilities and Concentration of Measure Inequalities
Sriram Sankaranarayanan
In Foundations of Probabilistic Programming (Editors: Gilles Barthe, Joost-Pieter Katoen, and Alexandra Silva),
2020.
The quantitative analysis of probabilistic programs
answers queries involving the expected values of program variables and
expressions involving them, as well as bounds on the probabilities of
assertions. In this chapter, we will present the use of concentration of
measure inequalities to reason about such bounds. First, we will briefly
present and motivate standard concentration of measure inequalities.
Next, we survey approaches to reason about quantitative properties using
concentration of measure inequalities, illustrating these on numerous
motivating examples. Finally, we discuss currently open challenges in
this area for future work.
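As a minimal illustration of the kind of bound discussed, the following Python sketch compares Hoeffding's inequality against a pseudo-random simulation of a toy probabilistic program (the program and its parameters are chosen for illustration only):

```python
import math, random

# Probabilistic "program": s = sum of n i.i.d. Uniform(0, 1) draws.
# Query: P(s >= n/2 + t).  Hoeffding gives a guaranteed bound:
#   P(s - E[s] >= t) <= exp(-2 t^2 / n)   (each summand has range width 1)
def hoeffding_bound(n, t):
    return min(1.0, math.exp(-2.0 * t * t / n))

def empirical_estimate(n, t, trials=20000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() for _ in range(n))
        if s - n / 2.0 >= t:
            hits += 1
    return hits / trials

n, t = 20, 3.0
bound = hoeffding_bound(n, t)
freq = empirical_estimate(n, t)
print(bound, freq)   # the bound must dominate the simulated frequency
```

The bound is typically loose, but unlike the simulated frequency it holds with mathematical certainty, which is the trade-off this chapter explores.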
Multi-hour Blood Glucose Prediction in T1D: A Patient Specific Approach using Shallow Neural Network Models
Taisa Kushner,
Marc D. Breton,
and Sriram Sankaranarayanan
Diabetes Technology and Therapeutics,
22
(12)
pp. 883-891,
2020.
Considering current insulin action profiles and the nature of glycemic responses to insulin, there is an acute need for longer-term, accurate, blood glucose predictions to inform insulin dosing schedules and enable effective decision support for the treatment of type 1 diabetes (T1D). However, current methods achieve acceptable accuracy only for prediction horizons of up to one hour, whereas typical post-prandial excursions and insulin action profiles last 4-6 hours. In this work, we present models for prediction horizons of 60-240 min developed by leveraging "shallow" neural networks, allowing for significantly lower complexity when compared to related approaches. Methods: Patient-specific neural network-based predictive models are developed and tested on previously collected data from a cohort of twenty-four subjects with T1D. Models are designed to avoid serious pitfalls by incorporating essential physiological knowledge into the model structure. Patient-specific models were generated to predict glucose 60, 90, 120, 180, and 240 minutes ahead, along with a "transfer learning" approach to improve accuracy for patients where data is limited. Finally, we determined subgroup characteristics that result in higher model accuracy overall. Results: RMSE was 28±4 mg/dL, 33±4 mg/dL, 38±6 mg/dL, 40±8 mg/dL, and 43±12 mg/dL for 60, 90, 120, 180, and 240 minutes, respectively. For all prediction horizons, at least 93% of predictions were clinically acceptable by the Clarke Error Grid. Variance of historic continuous glucose monitor (CGM) values was a strong predictor of the need for transfer learning approaches. Conclusions: A shallow neural network, using features extracted from past CGM data and insulin logs, is able to achieve multi-hour glucose predictions with satisfactory accuracy.
Models are patient-specific, learned on readily available data without the need for additional tests, and improve accuracy while lowering complexity compared to related approaches, paving the way for new advisory and closed-loop algorithms able to encompass most of the insulin action time frame.
Conformance verification for neural network models of glucose-insulin dynamics
Taisa Kushner,
Sriram Sankaranarayanan,
and Marc Breton
In Hybrid Systems: Computation and Control (HSCC),
pp. 13:1–13:12,
2020.
Neural networks present a useful framework for learning complex
dynamics, and are increasingly being considered as components to
closed loop predictive control algorithms. However, if they are to be
utilized in such safety-critical advisory settings, they must be
provably "conformant" to the governing scientific (biological,
chemical, physical) laws which underlie the modeled process.
Unfortunately, this is not easily guaranteed as neural network models
are prone to learn patterns which are artifacts of the conditions
under which the training data is collected, which may not necessarily
conform to underlying physiological laws.
In this work, we utilize a formal range-propagation based approach for
checking whether neural network models for predicting future blood
glucose levels of individuals with type-1 diabetes are monotonic in
terms of their insulin inputs. These networks are increasingly part of
closed loop predictive control algorithms for "artificial pancreas"
devices which automate control of insulin delivery for individuals
with type-1 diabetes. Our approach considers a key property that blood
glucose levels must be monotonically decreasing with increasing
insulin inputs to the model. Multiple representative neural network models for blood glucose
prediction are trained and tested on real patient data, and
conformance is tested through our verification approach. We observe
that standard approaches to training networks result in models which
violate the core relationship between insulin inputs and glucose
levels, despite having high prediction accuracy. We propose an
approach that can learn conformant models without much loss in
accuracy.
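The monotonicity property can be checked soundly for a one-hidden-layer ReLU network with a simple sign condition on path weights; this is a simpler sufficient check than the range-propagation approach of the paper, and the network weights below are hypothetical:

```python
def monotone_decreasing_in(W1, w2, i):
    """Sufficient condition: y = w2 . relu(W1 x + b1) is monotonically
    non-increasing in input i if every path product w2[j] * W1[j][i]
    is <= 0.  ReLU is non-decreasing, so the sign of each path through
    the hidden layer is fixed by its weights (biases play no role)."""
    return all(w2[j] * W1[j][i] <= 0 for j in range(len(w2)))

# Hypothetical 2-input network (inputs: glucose history, insulin dose)
# with 3 hidden units.
W1 = [[0.8, -0.5],
      [0.4, -0.1],
      [-0.3, 0.7]]
w2 = [1.0, 0.5, -0.9]   # note the negative output weight on unit 2

GLUCOSE, INSULIN = 0, 1
print(monotone_decreasing_in(W1, w2, INSULIN),
      monotone_decreasing_in(W1, w2, GLUCOSE))
```

The sign condition is sufficient but not necessary: networks with mixed path signs may still be monotone over the input region of interest, which is exactly where range-propagation-based verification earns its keep.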
Reachability Analysis using Message Passing over Tree Decompositions.
Sriram Sankaranarayanan
In International Conference on Computer-Aided Verification (CAV),
Lecture Notes in Computer Science (LNCS)
Vol. 12224,
pp. 604–628,
2020.
In this paper, we study efficient approaches to reachability analysis
for discrete-time nonlinear dynamical systems when the dependencies
among the variables of the system have low treewidth. Reachability
analysis over nonlinear dynamical systems asks if a given set of
target states can be reached, starting from an initial set of
states. This is solved by computing conservative overapproximations
of the reachable set using abstract domains to represent these
approximations. However, most approaches must trade off the level of
conservatism against the cost of performing the analysis, especially when
the number of system variables increases. This makes reachability
analysis challenging for nonlinear systems with a large number of
state variables. Our approach works by constructing a dependency graph
among the variables of the system. The tree decomposition of this
graph builds a tree wherein each node of the tree is labeled with
subsets of the state variables of the system. Furthermore, the tree
decomposition satisfies important structural properties. Using the
tree decomposition, our approach abstracts a set of states of the high
dimensional system into a tree of sets of lower dimensional
projections of this state. We derive various properties of
this abstract domain, including conditions under which the original
high dimensional set can be fully recovered from its low dimensional
projections. Next, we use ideas from message passing developed
originally for belief propagation over Bayesian networks to perform
reachability analysis over the full state space in an efficient
manner. We illustrate our approach on some interesting nonlinear
systems with low treewidth to demonstrate the advantages of our
approach.
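The interplay between low-dimensional clique updates and message passing can be sketched for a three-variable chain whose dependency graph has treewidth one; the affine dynamics are an illustrative assumption, not one of the paper's benchmarks:

```python
import itertools

# Variables x0, x1, x2 with chain dependencies x0' = f(x0, x1),
# x1' = f(x1), x2' = f(x1, x2): the dependency graph has treewidth 1,
# and a tree decomposition with cliques {x0, x1} and {x1, x2}.
f01_x0 = lambda x0, x1: 0.5 * x0 + 0.2 * x1
f_x1   = lambda x1:     0.8 * x1
f12_x2 = lambda x1, x2: -0.2 * x1 + 0.6 * x2

def corners_image(f, box):
    """Interval image of a map via corner evaluation (exact for affine f)."""
    vals = [f(*c) for c in itertools.product(*box)]
    return (min(vals), max(vals))

clique01 = {"x0": (-1.0, 1.0), "x1": (-1.0, 1.0)}
clique12 = {"x1": (-1.0, 1.0), "x2": (-1.0, 1.0)}

for _ in range(5):
    # Local image computation on each low-dimensional clique.
    c01 = {"x0": corners_image(f01_x0, (clique01["x0"], clique01["x1"])),
           "x1": corners_image(f_x1, (clique01["x1"],))}
    c12 = {"x1": corners_image(f_x1, (clique12["x1"],)),
           "x2": corners_image(f12_x2, (clique12["x1"], clique12["x2"]))}
    # Message passing: the shared variable x1 must agree across cliques,
    # so both keep the intersection of their x1 intervals.
    shared = (max(c01["x1"][0], c12["x1"][0]),
              min(c01["x1"][1], c12["x1"][1]))
    c01["x1"] = c12["x1"] = shared
    clique01, clique12 = c01, c12

print(clique01, clique12)
```

Each clique only ever manipulates a two-dimensional projection, yet the intersection messages over shared variables let information flow across the whole state space, which is the source of the efficiency gains for low-treewidth systems.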
Unbounded-Time Safety Verification of Stochastic Differential Dynamics
Shenghua Feng,
Mingshuai Chen,
Bai Xue,
Sriram Sankaranarayanan,
and Naijun Zhan
In International Conference on Computer-Aided Verification (CAV),
Lecture Notes in Computer Science (LNCS)
Vol. 12225,
pp. 327–348,
2020.
In this paper, we propose a method for bounding the probability that a
stochastic differential equation (SDE) system violates a safety specification over
the infinite time horizon. SDEs are mathematical models of stochastic processes
that capture how states evolve continuously in time. They are widely used in
numerous applications such as engineered systems (e.g., modeling how pedestrians move in an intersection), computational finance (e.g., modeling stock option
prices), and ecological processes (e.g., population change over time). Previously
the safety verification problem has been tackled over finite and infinite time horizons using a diverse set of approaches. The approach in this paper attempts to
connect the two views by first identifying a finite time bound, beyond which the
probability of a safety violation can be bounded by a negligibly small number.
This is achieved by discovering an exponential barrier certificate that proves exponentially converging bounds on the probability of safety violations over time.
Once the finite time interval is found, a finite-time verification approach is used
to bound the probability of violation over this interval. We demonstrate our approach over a collection of interesting examples from the literature, wherein our
approach can be used to find tight bounds on the violation probability of safety
properties over the infinite time horizon.
2019
Predictive Runtime Monitoring for Linear Stochastic Systems and Applications to Geofence Enforcement for UAVs
Hansol Yoon,
Yi Chou,
Xin Chen,
Eric Frew,
and Sriram Sankaranarayanan
In International Conference on Runtime Verification (RV),
Vol. 11757,
pp. 349–367,
2019.
We propose a predictive runtime monitoring approach for linear systems with stochastic disturbances. The goal of the monitor is to decide if there exists a possible sequence of control inputs over a given time horizon to ensure that a safety property is maintained with a sufficiently high probability. We derive an efficient algorithm for performing the predictive monitoring in real time,
specifically for linear time invariant (LTI) systems driven by stochastic
disturbances. The algorithm implicitly defines a control envelope set such that
if the current control input to the system lies in this set, there exists a
future strategy over a time horizon consisting of the next N steps to
guarantee the safety property of interest. As a result, the proposed monitor is
oblivious of the actual controller, and therefore, applicable even in the
presence of complex control systems including highly adaptive controllers.
Furthermore, we apply our proposed approach to monitor whether a UAV will
respect a "geofence" defined by a geographical region over which the vehicle
may operate. To achieve this, we construct a data-driven linear model of the
UAV's dynamics, while carefully modeling the uncertainties due to wind, GPS
errors and modeling errors as time-varying disturbances. Using realistic data
obtained from flight tests, we demonstrate the advantages and drawbacks of the
predictive monitoring approach.
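A one-dimensional sketch conveys the idea of a control envelope: compute, offline, the set of states from which safety can be maintained for N steps, and then, online, the set of admissible inputs at the current state. All constants below are hypothetical:

```python
# One-dimensional sketch: x_{k+1} = a*x_k + b*u_k + w_k, with the
# disturbance bounded (with high probability) by |w_k| <= wq, inputs
# u in [U_MIN, U_MAX], and safe set |x| <= X_SAFE.
a, b, wq, X_SAFE = 1.1, 1.0, 0.1, 5.0
U_MIN, U_MAX = -1.0, 1.0

def backward_safe_sets(n_steps):
    """S_0 = safe set; S_{k+1} = states from which some admissible u
    keeps a*x + b*u + w inside S_k for every |w| <= wq."""
    lo, hi = -X_SAFE, X_SAFE
    sets = [(lo, hi)]
    for _ in range(n_steps):
        new_lo = (lo + wq - b * U_MAX) / a
        new_hi = (hi - wq - b * U_MIN) / a
        lo, hi = max(new_lo, -X_SAFE), min(new_hi, X_SAFE)
        sets.append((lo, hi))
    return sets

def control_envelope(x, target):
    """Admissible inputs u at state x that land in `target` despite w."""
    lo, hi = target
    return (max((lo + wq - a * x) / b, U_MIN),
            min((hi - wq - a * x) / b, U_MAX))

S = backward_safe_sets(10)
# Monitor check at x = 2.0: land in the set that remains controllably
# safe for the remaining steps; an empty envelope would raise an alarm.
env = control_envelope(2.0, S[-2])
print(S[-1], env)
```

Because the envelope is defined purely by the dynamics and the safe set, the monitor never needs to know which controller is actually in the loop, mirroring the controller-oblivious property described above.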
Bayesian parameter estimation for nonlinear ODEs using sensitivity analysis
Yi Chou,
and Sriram Sankaranarayanan
In International Joint Conference on Artificial Intelligence (IJCAI),
pp. 5708-5714,
2019.
We investigate approximate Bayesian inference techniques for
nonlinear systems described by ordinary differential equation (ODE) models.
In particular, the approximations will be based on set-valued reachability analysis
approaches, yielding approximate models for the posterior distribution.
Nonlinear ODEs are widely used to
mathematically describe physical and biological models. However,
these models are often described by parameters that are not directly
measurable and have an impact on the system behaviors. Often, noisy
measurement data combined with physical/biological intuition serve as the
means for finding appropriate values of these parameters.
Our approach operates under a Bayesian framework, given prior
distribution over the parameter space and noisy observations under a
known sampling distribution. We explore subsets of the space of
model parameters, computing bounds on the likelihood for each
subset. This is performed using nonlinear set-valued reachability
analysis that is made faster by means of linearization around a
reference trajectory. The tiling of the parameter space can be
adaptively refined to make bounds on the likelihood tighter. We evaluate our
approach on a variety of nonlinear benchmarks and compare our
results with Markov Chain Monte Carlo and Sequential Monte Carlo approaches.
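The tiling idea can be illustrated on a scalar ODE whose solution is monotone in the parameter, so each parameter tile maps to an exact state interval and hence to bounds on a Gaussian likelihood; the model, data, and tolerances below are assumptions for illustration:

```python
import math

# Toy model dx/dt = -p * x, x(0) = 1  =>  x(t; p) = exp(-p * t).
# Observations at times T with Gaussian noise (sigma); true p near 0.7.
T = [0.5, 1.0, 2.0]
Y = [0.71, 0.50, 0.24]
SIGMA = 0.05

def state_interval(p_lo, p_hi, t):
    """x(t; p) is decreasing in p, so a parameter tile maps to an
    exact state interval."""
    return math.exp(-p_hi * t), math.exp(-p_lo * t)

def loglik_bounds(p_lo, p_hi):
    """Bounds on the Gaussian log-likelihood over the whole tile."""
    lo = hi = 0.0
    for t, y in zip(T, Y):
        x_lo, x_hi = state_interval(p_lo, p_hi, t)
        d_min = 0.0 if x_lo <= y <= x_hi else min(abs(y - x_lo), abs(y - x_hi))
        d_max = max(abs(y - x_lo), abs(y - x_hi))
        hi += -0.5 * (d_min / SIGMA) ** 2
        lo += -0.5 * (d_max / SIGMA) ** 2
    return lo, hi

# Tile the parameter space, then prune tiles that cannot contain the
# high-likelihood region; survivors would be refined adaptively.
tiles = [(0.1 * i, 0.1 * (i + 1)) for i in range(20)]      # p in [0, 2]
bounds = {tile: loglik_bounds(*tile) for tile in tiles}
best_lower = max(lo for lo, _ in bounds.values())
kept = [t for t in tiles if bounds[t][1] >= best_lower]
print(best_lower, kept)
```

For a general nonlinear ODE the state interval is no longer available in closed form, which is where set-valued reachability with linearization around a reference trajectory takes over.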
Formal Techniques for Verification and Testing of Cyber-Physical Systems
Jyotirmoy Deshmukh,
and Sriram Sankaranarayanan
In Design Automation of Cyber-Physical Systems (Edited by Arquimedes Canedo and Mohammad Al Faruque),
pp. 69-105,
2019.
Modern cyber-physical systems (CPS) are often developed in a model-based development (MBD) paradigm. The MBD paradigm involves the construction of different kinds of models: (1) a plant model that encapsulates the physical components of the system (e.g. mechanical, electrical, chemical components) using representations based on differential and algebraic equations, (2) a controller model that encapsulates the embedded software components of the system, and (3) an environment model that encapsulates physical assumptions on the external environment of the CPS application. In order to reason about the correctness of CPS applications, we typically pose the following question: For all possible environment scenarios, does the closed-loop system consisting of the plant and the controller exhibit the desired behavior? Typically, the desired behavior is expressed in terms of properties that specify unsafe behaviors of the closed-loop system. Often, such behaviors are expressed using variants of real-time temporal logics. In this chapter, we will examine formal methods based on bounded-time reachability analysis, simulation-guided reachability analysis, deductive techniques based on safety invariants, and formal, requirement-driven testing techniques. We will review key results in the literature, and discuss the scalability and applicability of such systems to various academic and industrial contexts. We conclude this chapter by discussing the challenge to formal verification and testing techniques posed by newer CPS applications that use AI-based software components.
Models, Devices, Properties and Verification of Artificial Pancreas Systems
Taisa Kushner,
B. Wayne Bequette,
Faye Cameron,
Gregory Forlenza,
David Maahs,
and Sriram Sankaranarayanan
In Automated Reasoning for Systems Biology And Medicine (Edited by Paulo Zuliani and Pietro Lio),
pp. 93-131,
2019.
In this chapter, we present the interplay between models of human
physiology, closed loop medical devices, correctness specifications
and verification algorithms in the context of the artificial
pancreas. The artificial pancreas refers to a series of increasingly
sophisticated closed loop medical devices that automate the delivery
of insulin to people with type-1 diabetes. On one hand, they hold
the promise of easing the everyday burden of managing type-1
diabetes. On the other, they expose the patient to potentially
deadly consequences of incorrect insulin delivery that could lead to
coma or even death in the short term, or damage to critical organs
such as the eyes, kidneys and the heart in the longer
term. Verifying the correctness of these devices involves a careful
modeling of human physiology, the medical device, and the surrounding
disturbances at the right level of abstraction. We first provide a
brief overview of insulin glucose regulation and the spectrum of
associated mathematical models. At one end are physiological models
that try to capture the transport, metabolism, uptake and
interactions of insulin and glucose. On the other end are data-driven
models, which include time series models and neural networks. The
first part of the chapter examines some of these models in detail in
order to provide a basis for verifying medical devices. Next, we
present some of the devices which are commonly used in blood glucose
control, followed by a specification of key correctness properties
and performance measures. Finally, we examine the application of
some of the state-of-the-art approaches to verification and
falsification of these properties to the models and devices
considered. We conclude with a brief presentation on future
directions for next generation artificial pancreas, and the
challenges involved in reasoning about them.
Trajectory Tracking Control for Robotic Vehicles using Counterexample Guided Training of Neural Networks
Arthur Claviere,
Souradeep Dutta,
and Sriram Sankaranarayanan
In International Conference on Automated Planning and Scheduling (ICAPS),
pp. 680-688,
2019.
We investigate approaches to train neural networks for
controlling vehicles to follow a fixed reference
trajectory robustly, while respecting limits on their velocities
and accelerations. Here robustness means that if a vehicle starts
inside a fixed region around the reference trajectory, it remains within
this region while moving along the reference from an initial set to a
target set. We consider the combination of two ideas in this paper: (a)
demonstrations of the correct control obtained from a model-predictive
controller (MPC) and (b) falsification approaches that actively search
for violations of the property, given a current candidate. Thus, our
approach creates an initial training set using the MPC loop and
builds a first candidate neural network controller. This controller
is repeatedly analyzed using falsification that searches for
counterexample trajectories, and the resulting counterexamples are used
to create new training examples. This process proceeds iteratively
until the falsifier no longer succeeds within a given computational
budget. We propose falsification approaches using a combination of
random sampling and gradient descent to systematically search for
violations. We evaluate our combined approach on a variety of
benchmarks that involve controlling dynamical models of cars and
quadrotor aircraft.
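The counterexample-guided loop can be sketched for a scalar plant with a linear demonstrator standing in for the MPC; everything below (plant, property, falsifier budget) is a toy assumption:

```python
import random

# Plant: x_{k+1} = 1.05*x_k + u_k.  Property: from any x0 in [-1, 1],
# |x_20| <= 0.1.  Demonstrator (stand-in for an MPC): u = -0.5 * x.
def plant(x, u): return 1.05 * x + u
def demonstrator(x): return -0.5 * x

def rollout(x0, gain, steps=20):
    x = x0
    for _ in range(steps):
        x = plant(x, -gain * x)
    return x

def fit_gain(data):
    """Least-squares fit of u = -k*x to demonstrator samples."""
    num = sum(-u * x for x, u in data)
    den = sum(x * x for x, u in data)
    return num / den

def falsify(gain, rng, samples=200):
    """Random-sampling falsifier: search for an initial state that
    violates the property under the current candidate."""
    for _ in range(samples):
        x0 = rng.uniform(-1.0, 1.0)
        if abs(rollout(x0, gain)) > 0.1:
            return x0
    return None

rng = random.Random(1)
data = [(x, demonstrator(x)) for x in [0.01, -0.02]]  # tiny initial set
for it in range(10):
    gain = fit_gain(data)
    cex = falsify(gain, rng)
    if cex is None:
        break                       # falsifier gave up: accept candidate
    data.append((cex, demonstrator(cex)))  # query demonstrator on the cex
print(it, gain)
```

The paper's falsifier additionally uses gradient descent on a robustness measure, and the learned policy is a neural network rather than a single gain, but the learn/falsify/retrain structure is the same.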
Factory-Calibrated Continuous Glucose Monitoring: How and Why It Works, and the Dangers of Reuse Beyond Approved Duration of Wear
Gregory Forlenza,
Taisa Kushner,
Laurel Messer,
R. Paul Wadwa,
and Sriram Sankaranarayanan
Diabetes: Technology and Therapeutics,
21
(4)
pp. 222-229,
2019.
Continuous glucose monitors (CGM) display real-time glucose values enabling greater
glycemic awareness with reduced management burden. Factory-calibrated CGM systems
allow for glycemic assessment without the pain and inconvenience of fingerstick
glucose testing. Advances in sensor chemistry and CGM algorithms have enabled
factory-calibrated systems to have greater accuracy than previous generations of
CGM technology. Despite these advances many patients and providers are hesitant
about the idea of removing fingerstick testing from their diabetes care. In this
commentary, we aim to review the clinical trials on factory-calibrated CGM systems,
present the algorithms which facilitate factory-calibrated CGMs to improve accuracy,
discuss clinical use of factory-calibrated CGMs, and finally present two cases
demonstrating the dangers of utilizing exploits in commercial systems to
prolong sensor life.
Formal Policy Learning from Demonstrations
Hadi Ravanbakhsh,
Sriram Sankaranarayanan,
and Sanjit A. Seshia
In International Conference on Robotics and Automation (ICRA),
pp. 6037-6043,
2019.
We consider the problem of learning structured, closed-loop policies
(feedback laws) from demonstrations in order to control underactuated
robotic systems, so that formal behavioral specifications such as
reaching a target set of states are satisfied. Our approach uses a
"counterexample-guided" iterative loop that involves the interaction
between a policy learner, a demonstrator and a verifier. The learner
is responsible for querying the demonstrator in order to obtain the
training data to guide the construction of a policy candidate.
This candidate is analyzed by the verifier and either accepted as
correct, or rejected with a counterexample. In the latter case, the
counterexample is used to update the training data and further refine
the policy.
The approach is instantiated using receding horizon model-predictive
controllers (MPCs) as demonstrators. Rather than using regression
to fit a policy to the demonstrator actions, we extend the MPC
formulation with the gradient of the cost-to-go function evaluated at
sample states in order to constrain the set of policies compatible
with the behavior of the demonstrator.
We demonstrate the successful application of the resulting policy
learning schemes on two case studies
and we show how simple, formally-verified policies can be inferred
starting from complex and unverified nonlinear MPC implementations.
As a further benefit, the policies are many orders of magnitude
faster to implement when compared to the original MPCs.
We present an approach to construct reachable set overapproximations for
continuous-time dynamical systems controlled using neural network
feedback systems. Feedforward deep neural networks are now widely
used as a means for learning control laws through techniques such as
reinforcement learning and data-driven predictive control. However,
the learning algorithms for these networks do not guarantee
correctness properties on the resulting closed-loop systems. Our
approach seeks to construct overapproximate reachable sets by
integrating a Taylor model-based flowpipe construction scheme for
continuous differential equations with an approach that replaces the
neural network feedback law for a small subset of inputs by a
polynomial mapping. We generate the polynomial mapping using
regression from input-output samples. To ensure soundness, we
rigorously quantify the gap between the output of the network and
that of the polynomial model. We demonstrate the effectiveness of our
approach over a suite of benchmark examples ranging from 2 to 17
state variables, comparing our approach with alternative ideas based
on range analysis.
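The polynomial-surrogate idea can be sketched in one dimension: fit a quadratic to a stand-in network by least squares over samples, then estimate the gap between the two. The paper bounds this gap rigorously; the dense-grid estimate below only illustrates the idea, and the `network` function is a hypothetical stand-in:

```python
import math

def network(x):                 # stand-in for a trained feedback network
    return math.tanh(2.0 * x) - 0.3 * x

def solve3(A, b):
    """Gauss-Jordan elimination for the 3x3 normal equations."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Least-squares quadratic fit u ~ c0 + c1*x + c2*x^2 from samples.
xs = [-1.0 + 0.05 * i for i in range(41)]
ys = [network(x) for x in xs]
basis = [[1.0, x, x * x] for x in xs]
A = [[sum(bi[r] * bi[c] for bi in basis) for c in range(3)] for r in range(3)]
b = [sum(bi[r] * y for bi, y in zip(basis, ys)) for r in range(3)]
c0, c1, c2 = solve3(A, b)

poly = lambda x: c0 + c1 * x + c2 * x * x
# Empirical gap estimate over a dense grid; in the paper, the gap is
# quantified rigorously and carried into the flowpipe as an error term.
gap = max(abs(network(-1 + 0.001 * i) - poly(-1 + 0.001 * i))
          for i in range(2001))
print((c0, c1, c2), gap)
```

Once the gap is bounded, the flowpipe construction can treat the network as "polynomial plus bounded disturbance," which is what makes Taylor-model integration applicable.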
2018
Template polyhedra and bilinear optimization
Jessica Gronski,
Mohamed-Amin Ben Sassi,
Stephen Becker,
and Sriram Sankaranarayanan
Formal Methods in System Design,
54
pp. 27-63,
2018.
In this paper, we study the template polyhedral abstract domain using connections to bilinear optimization techniques. The connections between abstract interpretation and convex optimization approaches have been studied for nearly a decade now. Specifically, data flow constraints for numerical domains such as polyhedra can be expressed in terms of bilinear constraints. Algorithms such as policy and strategy iteration have been proposed for the special case of bilinear constraints that arise from template polyhedra wherein the desired invariants conform to a fixed template form. In particular, policy iteration improves upon a known post-fixed point by alternating between solving for an improved post-fixed point and finding certificates that are used to prove the new fixed point. In the first part of this paper, we propose a policy iteration scheme that changes the template on the fly in order to prove a target reachability property of interest. We show how the change to the template naturally fits inside a policy iteration scheme, and thus, propose a scheme that updates the template matrices associated with each program location. We demonstrate that the approach is effective over a set of benchmark instances, wherein, starting from a simple predefined choice of templates, the approach is able to infer appropriate template directions to prove a property of interest. However, it is well known that policy iteration can end up "stuck" in a saddle point from which future iterations cannot make progress. In the second part of this paper, we study this problem further by empirically comparing policy iteration with a variety of other approaches for bilinear programming. These approaches adapt well-known algorithms to the special case of bilinear programs as well as using off-the-shelf tools for nonlinear programming.
Our initial experience suggests that policy iteration seems to be the most advantageous approach for problems arising from abstract interpretation, despite the potential problems of getting stuck at a saddle point.
Learning Control Lyapunov Functions from Counterexamples and Demonstrations
Hadi Ravanbakhsh,
and Sriram Sankaranarayanan
Autonomous Robots,
43
pp. 275–307,
2018.
Note: Special Issue for Robotics: Science and Systems (Online First Article).
We present a technique for learning control Lyapunov-like functions, which are used in turn to synthesize controllers for nonlinear dynamical systems that can stabilize the system, or satisfy specifications such as remaining inside a safe set, or eventually reaching a target set while remaining inside a safe set. The learning framework uses a demonstrator that implements a black-box, untrusted strategy presumed to solve the problem of interest, a learner that poses finitely many queries to the demonstrator to infer a candidate function, and a verifier that checks whether the current candidate is a valid control Lyapunov-like function. The overall learning framework is iterative, eliminating a set of candidates on each iteration using the counterexamples discovered by the verifier and the demonstrations over these counterexamples. We prove its convergence using ellipsoidal approximation techniques from convex optimization. We also implement this scheme using nonlinear MPC controllers to serve as demonstrators for a set of state and trajectory stabilization problems for nonlinear dynamical systems. We show how the verifier can be constructed efficiently using convex relaxations of the verification problem for polynomial systems to semi-definite programming problem instances. Our approach is able to synthesize relatively simple polynomial control Lyapunov-like functions, and in that process replace the MPC using a guaranteed and computationally less expensive controller.
Specification-Based Monitoring of Cyber-Physical Systems: A Survey on Theory, Tools and Applications
The term Cyber-Physical Systems (CPS) typically refers to
engineered, physical and biological systems monitored and/or controlled
by an embedded computational core. The behaviour of a CPS over time
is generally characterised by the evolution of physical quantities, and
discrete software and hardware states. In general, these can be mathematically
modelled by the evolution of continuous state variables for
the physical components interleaved with discrete events. Despite large
effort and progress in the exhaustive verification of such hybrid systems,
the complexity of CPS models limits formal verification of the safety of their
behaviour to small instances. An alternative approach, closer to
the practice of simulation and testing, is to monitor and to predict CPS
behaviours at simulation-time or at runtime. In this chapter, we summarise
the state-of-the-art techniques for qualitative and quantitative
monitoring of CPS behaviours. We present an overview of some of the
important applications and, finally, we describe the tools supporting CPS
monitoring and compare their main features.
Path-Following through Control Funnel Functions
Hadi Ravanbakhsh,
Sina Aghli,
Christoffer Heckman,
and Sriram Sankaranarayanan
In Intelligent Robotics and Systems (IROS),
pp. 401-408,
2018.
We present an approach to path following using so-called control
funnel functions. Synthesizing controllers to "robustly" follow a reference
trajectory is a fundamental problem for autonomous vehicles. Robustness, in this context,
requires our controllers to handle a specified amount of deviation from
the desired trajectory. Our approach considers a timing law that
describes how fast to move along a given reference trajectory and a
control feedback law for reducing deviations from the reference. We synthesize
both feedback laws using "control funnel functions" that jointly encode the
control law as well as its correctness argument over a mathematical
model of the vehicle dynamics. We adapt a previously described
demonstration-based learning algorithm to synthesize a control
funnel function as well as the associated feedback law. We implement
this law on top of a 1/8th scale autonomous vehicle called the
Parkour car. We compare the performance of our path following
approach against a trajectory tracking approach by specifying
trajectories of varying lengths and curvatures. Our experiments
demonstrate the improved robustness obtained from the use of
control funnel functions.
In this paper, we provide an approach to data-driven
control for an artificial pancreas system by learning
neural network models of human insulin-glucose
physiology from available patient data and using a
mixed integer optimization approach to control blood
glucose levels in real-time using the inferred
models. First, our approach learns neural networks
to predict the future blood glucose values from
given data on insulin infusion and their resulting
effects on blood glucose levels. However, to provide
guarantees on the resulting model, we use quantile
regression to fit multiple neural networks that
predict upper and lower quantiles of the future
blood glucose levels, in addition to the mean.
Using the inferred set of neural networks, we
formulate a model-predictive control scheme that
adjusts both basal and bolus insulin delivery to
ensure that the risks of harmful hypoglycemia and
hyperglycemia are bounded using the quantile models
while the mean prediction stays as close as possible
to the desired target. We discuss how this scheme
can handle disturbances from large unannounced meals
as well as infeasibilities that result from
situations where the uncertainties in future glucose
predictions are too high. We experimentally
evaluate this approach on data obtained from a set
of 17 patients over the course of 40 nights per
patient. Furthermore, we also test our approach
using neural networks obtained from virtual patient
models available through the UVA-Padova simulator
for type-1 diabetes.
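The quantile models described above rest on quantile regression, which trains a predictor under the pinball (quantile) loss rather than squared error. As a minimal sketch (not the paper's neural networks; the data here are made up for illustration), the following shows that minimizing the pinball loss over constant predictors recovers the empirical quantile, which is why a network trained with this loss predicts an upper or lower quantile of future glucose:

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Mean pinball (quantile) loss for quantile level tau."""
    d = y - pred
    return np.mean(np.maximum(tau * d, (tau - 1.0) * d))

# Hypothetical glucose-like observations, for illustration only.
y = np.arange(100.0, 200.0)

# Among constant predictors, the pinball-loss minimizer is the empirical
# tau-quantile; a model trained with this loss thus predicts that quantile.
tau = 0.9
candidates = np.arange(100.0, 200.0)
losses = [pinball_loss(y, c, tau) for c in candidates]
best = candidates[int(np.argmin(losses))]
print(best)  # close to the empirical 90th percentile of y
```

Fitting one network per quantile level (e.g., tau = 0.05, 0.5, 0.95) yields the lower, mean-like, and upper predictions used as constraints in the MPC formulation.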
Learning and Verification of Feedback Control Systems using Feedforward Neural Networks
Souradeep Dutta,
Susmit Jha,
Sriram Sankaranarayanan,
and Ashish Tiwari
In IFAC Conference on Analysis and Design of Hybrid Systems (ADHS),
Vol. 51,
pp. 151-156,
2018.
We present an approach to learn and formally verify
feedback laws for data-driven models of neural networks. Neural
networks are emerging as powerful and general data-driven
representations for functions. This has led to their increased use
in data-driven plant models and the representation of feedback laws
in control systems. However, it is hard to formally verify
properties of such feedback control systems. The proposed learning
approach uses a receding horizon formulation that samples from the
initial states and disturbances to enforce desirable properties such
as reachability, safety and stability. Next, our verification
approach uses an over-approximate reachability analysis over the
system, using template polyhedra supported by range analysis
for deep feedforward neural networks. We report promising results
obtained by applying our techniques on several challenging nonlinear
dynamical systems.
Mining framework usage graphs from app corpora
Sergio Mover,
Sriram Sankaranarayanan,
Rhys Olsen,
and Bor-Yuh Evan Chang
In International Conference on Software Analysis, Evolution and Reengineering (SANER),
pp. 277-289,
2018.
Note: Winner of IEEE TCSE Distinguished Paper Award.
We investigate the problem of mining graph-based usage patterns for large, object-oriented frameworks like Android, revisiting previous approaches based on graph-based object usage models (groums). Groums are a promising approach to represent usage patterns for object-oriented libraries because they simultaneously describe control flow and data dependencies between methods of multiple interacting object types. However, this expressivity comes at a cost: mining groums requires solving a subgraph isomorphism problem that is well known to be expensive. This cost limits the applicability of groum mining to large API frameworks. In this paper, we employ groum mining to learn usage patterns for object-oriented frameworks from program corpora. The central challenge is to scale groum mining so that it is sensitive to usages horizontally across programs from arbitrarily many developers (as opposed to simply usages vertically within the program of a single developer). To address this challenge, we develop a novel groum mining algorithm that scales on a large corpus of programs. We first use frequent itemset mining to restrict the search for groums to smaller subsets of methods in the given corpus. Then, we pose the subgraph isomorphism as a SAT problem and apply efficient pre-processing algorithms to rule out fruitless comparisons ahead of time. Finally, we identify containment relationships between clusters of groums to characterize popular usage patterns in the corpus (as well as classify less popular patterns as possible anomalies). We find that our approach scales on a corpus of over five hundred open source Android applications, effectively mining obligatory and best-practice usage patterns.
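The frequent-itemset pre-filtering step mentioned above can be sketched with a minimal Apriori-style miner over sets of API methods. This is an illustrative toy, not the paper's implementation; the corpus and method names below are hypothetical:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Minimal Apriori: return all method sets that appear in at least
    min_support of the given programs (transactions), with their counts."""
    items = sorted({m for t in transactions for m in t})
    result = {}
    # Level 1: frequent single methods.
    level = [frozenset([i]) for i in items
             if sum(i in t for t in transactions) >= min_support]
    k = 1
    while level:
        for s in level:
            result[s] = sum(s <= t for t in transactions)
        # Candidate generation: join frequent k-sets into (k+1)-sets.
        cand = {a | b for a, b in combinations(level, 2) if len(a | b) == k + 1}
        level = [c for c in cand
                 if sum(c <= t for t in transactions) >= min_support]
        k += 1
    return result

# Hypothetical corpus: each program is the set of Android API methods it calls.
corpus = [
    {"Cursor.query", "Cursor.moveToFirst", "Cursor.close"},
    {"Cursor.query", "Cursor.moveToFirst", "Cursor.close", "Log.d"},
    {"Cursor.query", "Cursor.close"},
    {"Log.d"},
]
freq = frequent_itemsets([frozenset(t) for t in corpus], min_support=3)
print(sorted(tuple(sorted(s)) for s in freq))
```

Only method sets surviving this filter would then be considered in the expensive subgraph-isomorphism (SAT) phase.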
A Data-Driven Approach to Artificial Pancreas Verification and Synthesis.
Taisa Kushner,
David Bortz,
David Maahs,
and Sriram Sankaranarayanan
In Intl. Conference on Cyber-Physical Systems (ICCPS'18),
pp. 242-252,
2018.
This paper presents a case study of a data driven approach to
verification and parameter synthesis for artificial pancreas control
systems which deliver insulin to patients with type-1 diabetes
(T1D). We present a new approach to tuning parameters using
non-deterministic data-driven models for human insulin-glucose
regulation, which are inferred from patient data using multiple time
scales. Taking these equations as constraints, we model the behavior
of the entire closed loop system over a five-hour time horizon cast
as an optimization problem. Next, we demonstrate this approach using
patient data gathered from a previously conducted outpatient
clinical study involving insulin and glucose data collected from
50 patients with T1D and 40 nights per patient. We use the
resulting data-driven models to predict how the patients would
perform under a PID-based closed loop system which forms the basis
for the first commercially available hybrid closed loop
device. Furthermore, we provide a re-tuning methodology that can
potentially improve control for 82% of patients, based on the
results of an exhaustive reachability analysis. Our results
demonstrate that simple nondeterministic models allow us to
efficiently tune key controller parameters, thus paving the way for
interesting clinical translational applications.
Output Range Analysis for Deep Feedforward Neural Networks
Souradeep Dutta,
Susmit Jha,
Sriram Sankaranarayanan,
and Ashish Tiwari
In Proceedings of NASA Formal Methods Symposium (NFM),
Lecture Notes In Computer Science
Vol. 10811,
pp. 121-138,
2018.
Given a neural network (NN) and a set of possible inputs to the
network described by polyhedral constraints, we aim to compute a
safe over-approximation of the set of possible output values. This
operation is a fundamental primitive enabling the formal analysis of
neural networks that are extensively used in a variety of machine
learning tasks such as perception and control of autonomous systems.
Increasingly, they are deployed in high-assurance applications,
leading to a compelling use case for formal verification approaches.
In this paper, we present an efficient range estimation algorithm
that iterates between an expensive global combinatorial search using
mixed-integer linear programming problems, and a relatively
inexpensive local optimization that repeatedly seeks a local optimum
of the function represented by the NN. We implement our approach and
compare it with Reluplex, a recently proposed solver for deep neural
networks. We demonstrate applications of our approach to computing
flowpipes for neural network-based feedback controllers. We show
that the use of local search in conjunction with mixed-integer
linear programming solvers effectively reduces the combinatorial
search over possible combinations of active neurons in the network
by pruning away suboptimal nodes.
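For intuition about sound output bounds on a network, a much simpler over-approximation than the paper's MILP-with-local-search algorithm is interval arithmetic propagated layer by layer. The sketch below (weights are randomly generated, purely illustrative) computes a box that provably contains every output of a small ReLU network over a box of inputs:

```python
import numpy as np

def interval_bounds(layers, lo, hi):
    """Propagate an input box [lo, hi] through affine + ReLU layers.
    Returns a sound (generally loose) over-approximation of the outputs."""
    for W, b, relu in layers:
        c = (lo + hi) / 2.0
        r = (hi - lo) / 2.0
        c2 = W @ c + b
        r2 = np.abs(W) @ r          # the radius grows by |W|
        lo, hi = c2 - r2, c2 + r2
        if relu:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
layers = [(W1, b1, True), (W2, b2, False)]

lo, hi = interval_bounds(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))

# Sanity check: sampled network outputs must lie inside the computed box.
xs = rng.uniform(-1.0, 1.0, size=(1000, 2))
outs = np.maximum(xs @ W1.T + b1, 0.0) @ W2.T + b2
assert outs.min() >= lo[0] - 1e-9 and outs.max() <= hi[0] + 1e-9
```

The MILP encoding in the paper tightens such bounds by branching on the active/inactive status of each ReLU, with local search supplying good incumbent solutions to prune the combinatorial search.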
The predictive monitoring problem asks whether a deployed system is
likely to fail over the next T seconds under some environmental
conditions. This problem is of the utmost importance in
cyber-physical systems and has inspired real-time architectures
capable of adapting to such failures upon forewarning. In this
paper, we present a linear model-predictive scheme for the real-time
monitoring of linear systems governed by time-triggered controllers
and time-varying disturbances. The scheme uses a combination of
offline (advance) and online computations to decide if a given plant
model has entered a state from which no matter what control is
applied, the disturbance has a strategy to drive the system to an
unsafe region. Our approach is independent of the control strategy
used: this allows us to deal with plants that are controlled using
model-predictive control techniques or even opaque machine-learning
based control algorithms that are hard to reason with using existing
reachable set estimation algorithms. Our online computation uses a
simplified, efficient scheme that reuses the symbolic reachable sets
computed offline. The real-time monitor instantiates the reachable set
with a concrete state estimate, and repeatedly performs emptiness
checks with respect to a safety property. We classify the various alarms
raised by our approach in terms of what they imply about the system
as a whole.
We implement our real-time monitoring approach over numerous linear system
benchmarks and show that the computation can be performed rapidly in
practice. Furthermore, we also examine the alarms reported by our approach
and show how some of the alarms can be used to improve the controller in an
offline fashion.
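The offline/online split described above can be illustrated on a discrete-time linear system with bounded disturbances. The sketch below is a simplification: the dynamics, disturbance bound, horizon, and unsafe threshold are all made up, and the reachable sets are boxes rather than the symbolic sets the paper uses. Offline, we precompute state-independent matrix powers and disturbance radii; online, the monitor instantiates them with the concrete state estimate and checks emptiness against the unsafe region:

```python
import numpy as np

# Plant: x' = A x + w, with per-step disturbance |w_i| <= w_max.
A = np.array([[1.0, 0.1], [0.0, 0.95]])
w_max = 0.05
T = 20  # monitoring horizon, in steps

# Offline: powers of A and accumulated disturbance radii (state-independent).
powers = [np.linalg.matrix_power(A, k) for k in range(T + 1)]
radii = [sum(np.abs(powers[i]) @ np.full(2, w_max) for i in range(k))
         if k > 0 else np.zeros(2) for k in range(T + 1)]

def monitor(x0, unsafe_lo):
    """Online: alarm if some reachable box can cross unsafe_lo in
    coordinate 0 within T steps, no matter the concrete disturbance."""
    for k in range(T + 1):
        c = powers[k] @ x0          # instantiate with the concrete state
        if c[0] + radii[k][0] >= unsafe_lo:
            return k                # earliest step where safety may fail
    return None

print(monitor(np.array([4.0, 1.0]), unsafe_lo=6.0))  # alarm raised
print(monitor(np.array([0.0, 0.0]), unsafe_lo=6.0))  # no alarm
```

The online work per state estimate is a handful of matrix-vector products and comparisons, which is what makes real-time monitoring feasible.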
Template Polyhedra with a Twist
Sriram Sankaranarayanan,
and Mohamed Amin Ben Sassi
In Static Analysis Symposium (SAS),
Lecture Notes in Computer Science
Vol. 10422,
pp. 321-341,
2017.
In this paper, we draw upon connections between bilinear programming
and the process of computing (post) fixed points in abstract
interpretation. It is well-known that the data flow constraints for
numerical domains are expressed in terms of bilinear
constraints. Algorithms such as policy and strategy iteration have
been proposed for the special case of bilinear constraints that
arise from template numerical domains. In particular, policy
iteration improves upon a known post-fixed point by alternating
between solving for an improved post-fixed point and finding
certificates that prove the new fixed point.
In this paper, we draw upon these connections to formulate a policy
iteration scheme that changes the template on the fly in order to
prove a target reachability property of interest. We show how the
change to the template naturally fits inside a policy iteration
scheme, and thus propose a policy iteration scheme that updates the
template matrices associated with each program location. We
demonstrate that the approach is effective over a set of benchmark
instances, wherein starting from a simple predefined choice of
templates, the approach is able to infer appropriate template
directions to prove a property of interest. We also note some key
theoretical questions regarding the convergence of the policy
iteration scheme with template updates, that remain open at this
time.
Compositional Relational Abstraction for Nonlinear Systems
Xin Chen,
Sergio Mover,
and Sriram Sankaranarayanan
ACM Transactions on Embedded Computing Systems (Special Issue for EMSOFT 2017),
16(5s)
pp. 187,
2017.
We propose techniques to construct abstractions for nonlinear
dynamics in terms of relations expressed in linear arithmetic. Such
relations are useful for translating the closed loop verification
problem of control software with continuous-time, nonlinear plant
models into discrete and linear models that can be handled by
efficient software verification approaches for discrete-time
systems. We construct relations using Taylor model based flowpipe
construction and the systematic composition of relational
abstractions for smaller components. We focus on developing
efficient schemes for the special case of composing abstractions for
linear and nonlinear components. We implement our ideas using a
relational abstraction system, using the resulting abstraction
inside the verification tool NuXMV, which implements numerous
SAT/SMT solver-based verification techniques for discrete
systems. Finally, we evaluate the application of relational
abstractions for verifying properties of time triggered controllers,
comparing with the Flow* tool. We conclude that relational
abstractions are a promising approach towards nonlinear hybrid
system verification, capable of proving properties that are beyond
the reach of tools such as Flow*. At the same time, we highlight
the need for improvements to existing linear arithmetic SAT/SMT
solvers to better support reasoning with large relational
abstractions.
Learning Lyapunov (Potential) Functions from Counterexamples and Demonstrations
Hadi Ravanbakhsh,
and Sriram Sankaranarayanan
In Robotics Science and Systems (RSS),
pp. 16,
2017.
We present a technique for learning control Lyapunov (potential)
functions, which are used in turn to synthesize controllers
for nonlinear dynamical systems. The learning framework uses a
/demonstrator/ that implements a black-box, untrusted strategy
presumed to solve the problem of interest, a /learner/ that
poses finitely many queries to the demonstrator to infer a candidate
function and a /verifier/ that checks whether the current
candidate is a valid control Lyapunov function. The overall learning
framework is iterative, eliminating a set of candidates on each
iteration using the counterexamples discovered by the verifier and
the demonstrations over these counterexamples. We prove its
convergence using ellipsoidal approximation techniques from convex
optimization. We also implement this scheme using nonlinear MPC
controllers to serve as demonstrators for a set of state and
trajectory stabilization problems for nonlinear dynamical systems. Our
approach is able to synthesize relatively simple polynomial control
Lyapunov functions, in the process replacing the MPC with a
guaranteed, computationally less expensive controller.
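A stripped-down version of the learner/verifier interaction (omitting the demonstrator, restricting to a linear system and a finite grid of diagonal quadratic candidates, all choices hypothetical) can be sketched as follows. The verifier searches for a state where the candidate's Lie derivative is non-negative; each counterexample prunes the candidate set:

```python
import numpy as np

# System dx/dt = A x; we seek V(x) = a*x1^2 + b*x2^2 with
# dV/dt = grad(V) . (A x) < 0 away from the origin.
A = np.array([[-1.0, 3.0], [0.0, -1.0]])

def vdot(a, b, x):
    g = np.array([2 * a * x[0], 2 * b * x[1]])  # gradient of V at x
    return g @ (A @ x)

def verify(a, b, n=720):
    """Verifier: grid search on the unit circle for a counterexample."""
    for t in np.linspace(0.0, 2 * np.pi, n, endpoint=False):
        x = np.array([np.cos(t), np.sin(t)])
        if vdot(a, b, x) >= 0.0:
            return x  # counterexample: V fails to decrease here
    return None

# Learner: keep only candidates consistent with all counterexamples so far.
candidates = [(a, b) for a in range(1, 11) for b in range(1, 11)]
cexs = []
found = None
while candidates:
    a, b = candidates[0]
    cex = verify(a, b)
    if cex is None:
        found = (a, b)
        break
    cexs.append(cex)
    candidates = [c for c in candidates
                  if all(vdot(c[0], c[1], x) < 0.0 for x in cexs)]
print(found)
```

Each counterexample eliminates at least the candidate that produced it, so the loop terminates; the paper's convergence argument for the full framework instead uses ellipsoidal volume-reduction bounds over a continuous candidate space.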
Discriminating Traces with Time
Saeid Tizpaz-Niari,
Pavol Cerny,
Bor-Yuh Evan Chang,
Sriram Sankaranarayanan,
and Ashutosh Trivedi
In Tools and Algorithms for Construction and Analysis of Systems (TACAS),
Vol. 10206,
pp. 21-37,
2017.
What properties about the internals of a program explain
the possible differences in its overall running time
for different inputs? In this paper, we propose a
formal framework for considering this question, which we
dub trace-set discrimination. We show that even
though the algorithmic problem of computing maximum
likelihood discriminants is NP-hard, approaches
based on integer linear programming (ILP) and
decision tree learning can be useful in zeroing-in
on the program internals. On a set of Java
benchmarks, we find that compactly represented
decision trees scalably discriminate with high
accuracy: more scalably than maximum likelihood
discriminants and with comparable accuracy. We
demonstrate on three larger case studies how
decision-tree discriminants produced by our tool are
useful for debugging timing side-channel
vulnerabilities (i.e., where a malicious observer
infers secrets simply from passively watching
execution times) and availability vulnerabilities.
Formal Verification of a Multi-Basal Insulin Infusion Control Model.
Xin Chen,
Souradeep Dutta,
and Sriram Sankaranarayanan
In Workshop on Applied Verification of Hybrid Systems (ARCH),
pp. 16,
2017.
The artificial pancreas concept automates the delivery of
insulin to patients with type-1 diabetes, sensing the blood glucose
levels through a continuous glucose monitor (CGM) and using an
insulin infusion pump to deliver insulin. Formally verifying control
algorithms against physiological models of the patient is an
important challenge. In this paper, we present a case study of a
simple hybrid multi-basal control system that switches to different
preset insulin delivery rates over various ranges of blood glucose
levels. We use the Dalla-Man model for modeling the physiology of
the patient and a hybrid automaton model of the controller. First,
we reduce the problem state space and replace nonpolynomial terms by
approximations with very small errors in order to simplify the
model. Nevertheless, the model still remains nonlinear with up to 9
state variables.
Reachability analysis on this hybrid model is used to verify that
the blood glucose levels remain within a safe range overnight. This
poses challenges, including (a) the model exhibits many discrete
jumps in a relatively small time interval, and (b) the entire time
horizon corresponding to a full night is 720 minutes, wherein the
controller time period is 5 minutes. To overcome these difficulties,
we propose methods to effectively handle time-triggered jumps and
merge flowpipes over the same time interval. The evaluation shows
that the performance can be improved with the new techniques.
Analyzing Neighborhoods of Falsifying Traces in Cyber-Physical Systems
Ram Das Diwakaran,
Sriram Sankaranarayanan,
and Ashutosh Trivedi
In Intl. Conference on Cyber-Physical Systems (ICCPS),
pp. 109-119,
2017.
We study the problem of analyzing falsifying traces of
cyber-physical systems.
Specifically, given a system model and an input which is a counterexample
to a property of interest, we wish to understand which parts of the inputs
are "responsible" for the counterexample as a whole.
Whereas this problem is well known to be hard to solve precisely, we provide an
approach based on learning from repeated simulations of the system under test.
Our approach generalizes the classic concept of "one-at-a-time" sensitivity
analysis used in the risk and decision analysis community to understand how
inputs to a system influence a property in question.
Specifically, we pose the problem as one of finding a neighborhood of inputs
that contains the falsifying counterexample in question, such that each point
in this neighborhood corresponds to a falsifying input with a high probability.
We use ideas from statistical hypothesis testing to infer and validate such
neighborhoods from repeated simulations of the system under test.
This approach not only helps to understand the sensitivity of
these counterexamples to various parts of the inputs, but also
generalizes or widens the given counterexample by returning a
neighborhood of counterexamples around it.
We demonstrate our approach on a series of increasingly complex examples
from automotive and closed loop medical device domains.
We also compare our
approach against related techniques based on regression and machine learning.
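The hypothesis-testing step can be sketched on a toy system where the falsifying region is known. Everything below (the system, the property, the confidence bound chosen) is illustrative rather than the paper's exact procedure: we sample the candidate neighborhood, estimate the falsification probability, and accept only if a one-sided Hoeffding lower confidence bound clears a target probability:

```python
import math
import random

def falsifies(x):
    """Toy system under test: the property fails inside the unit disk."""
    return x[0] ** 2 + x[1] ** 2 < 1.0

def certify_neighborhood(center, radius, p_min=0.9, n=2000, delta=0.01, seed=7):
    """Accept the ball B(center, radius) if, with confidence 1 - delta,
    the falsification probability of a uniform sample exceeds p_min."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Uniform sample in the disk of the given radius around the center.
        r = radius * math.sqrt(rng.random())
        t = 2 * math.pi * rng.random()
        x = (center[0] + r * math.cos(t), center[1] + r * math.sin(t))
        hits += falsifies(x)
    p_hat = hits / n
    # One-sided Hoeffding bound: true p >= p_hat - sqrt(ln(1/delta) / (2n)).
    return p_hat - math.sqrt(math.log(1.0 / delta) / (2 * n)) >= p_min

print(certify_neighborhood((0.0, 0.0), 0.5))  # small ball: every sample falsifies
print(certify_neighborhood((0.0, 0.0), 3.0))  # large ball: mostly non-falsifying
```

Growing the radius until the test first fails yields a widened neighborhood of counterexamples around the original falsifying input.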
2016
Decomposed Reachability Analysis for Nonlinear Systems
Xin Chen,
and Sriram Sankaranarayanan
In IEEE Real Time Systems Symposium (RTSS),
pp. 13-24,
2016.
We introduce an approach to conservatively abstract a nonlinear
continuous system by a hybrid automaton whose continuous dynamics are
given by a decomposition of the original dynamics. The decomposed
dynamics is in the form of a set of lower-dimensional ODEs with
time-varying uncertainties whose ranges are defined by the
hybridization domains. We propose several techniques in the paper to
effectively compute abstractions and flowpipe
overapproximations. First, a novel method is given to reduce the
overestimation accumulation in a Taylor model flowpipe construction
scheme. Then we present our decomposition method, as well as the
framework of on-the-fly hybridization. A combination of the two
techniques allows us to handle much larger, nonlinear systems
with comparatively large initial sets. Our prototype implementation
is compared with existing reachability tools for offline and online flowpipe construction
on challenging
benchmarks of dimensions ranging from 7 to 30, with favorable
results.
Validating Numerical Semidefinite Programming Solvers for Polynomial Invariants
Pierre Roux,
Yuen-Lam Voronin,
and Sriram Sankaranarayanan
In Static Analysis Symposium (SAS),
Lecture Notes in Computer Science
Vol. 9837,
pp. 424-446,
2016.
Semidefinite programming (SDP) solvers are increasingly used as
primitives in many program verification tasks to synthesize and verify
polynomial invariants for a variety of systems including programs,
hybrid systems and stochastic models. On one hand, they provide a
tractable alternative to reasoning about semi-algebraic
constraints. However, the results are often unreliable due to
"numerical issues" that include a large number of reasons such as
floating point errors, ill-conditioned problems, failure of strict
feasibility, and more generally, the specifics of the algorithms used
to solve SDPs. These issues influence whether the final numerical
results are trustworthy. In this paper, we briefly survey the emerging use of
SDP solvers in the static analysis community. We report on the perils
of using SDP solvers for common invariant synthesis tasks,
characterizing the common failures that can lead to unreliable
answers. Next, we demonstrate existing tools for guaranteed
semidefinite programming that often prove inadequate to our
needs. Finally, we present a solution for verified semidefinite
programming that can be used to check the reliability of the solution
output by the solver and a padding procedure that can check the
presence of a feasible nearby solution to the one output by the
solver. We report on some successful preliminary experiments involving
our padding procedure.
Robust Controller Synthesis of Switched Systems Using Counterexample Guided Framework
Hadi Ravanbakhsh,
and Sriram Sankaranarayanan
In ACM/IEEE Conference on Embedded Software (EMSOFT),
pp. 8:1-8:10,
2016.
We investigate the problem of synthesizing robust controllers that
ensure that the closed loop satisfies an input reach-while-stay
specification, wherein all trajectories starting from some initial
set I, eventually reach a specified goal set G, while staying
inside a safe set S. Our plant model consists of a continuous-time
switched system controlled by an external switching signal and plant
disturbance inputs. The controller uses a state feedback law to
control the switching signal in order to ensure that the desired
correctness properties hold, regardless of the disturbance actions.
Our approach uses a proof certificate in the form of a robust
control Lyapunov-like function (RCLF) whose existence guarantees the
reach-while-stay specification. A counterexample guided inductive
synthesis (CEGIS) framework is used to find an RCLF by solving an
∃∀ formula iteratively using quantifier-free
SMT solvers. We compare our synthesis scheme against a common
approach that fixes disturbances to nominal values and synthesizes
the controller, ignoring the disturbance. We demonstrate that the
latter approach fails to yield a robust controller over some
benchmark examples, whereas our approach does.
Finally, we consider the problem of translating the RCLF synthesized
by our approach into a control implementation. We outline the series
of offline and real-time computation steps needed. The synthesized
controller is implemented and simulated using the
Matlab(tm)/Simulink(tm) model-based design framework, and illustrated
on some examples.
Model-Based Falsification of an Artificial Pancreas Control System.
Sriram Sankaranarayanan,
Suhas Akshar Kumar,
Faye Cameron,
B. Wayne Bequette,
Georgios Fainekos,
and David M. Maahs
ACM SIGBED Review (Special Issue on Medical Cyber-Physical Systems),
14
(2)
pp. 24-33,
2016.
We present a model-based falsification scheme for artificial
pancreas controllers. Our approach performs a closed-loop simulation
of the control software using models of the human insulin-glucose
regulatory system. Our work focuses on testing properties of an
overnight control system for hypoglycemia/hyperglycemia minimization
in patients with type-1 diabetes. This control system is currently
the subject of extensive phase II clinical trials.
We describe how the overall closed loop simulator is constructed, and
formulate properties to be tested. Significantly, the closed loop
simulation incorporates the control software, as is, without any
abstractions. Next, we demonstrate the use of a simulation-based
falsification approach to find potential property violations in the
resulting control system. We formulate a series of properties about
the controller behavior and examine the violations obtained. Using
these violations, we propose modifications to the controller software
to improve its performance under these adverse (corner-case)
scenarios. We also illustrate the effectiveness of robustness as a
metric for identifying interesting property violations. Finally, we
identify important open problems for future work.
Uncertainty Propagation using Probabilistic Affine Forms and Concentration of Measure Inequalities
Olivier Bouissou,
Eric Goubault,
Sylvie Putot,
Aleksandar Chakarov,
and Sriram Sankaranarayanan
In Tools and Algorithms for Construction and Analysis of Systems (TACAS),
Lecture Notes in Computer Science
Vol. 9636,
pp. 225-243,
2016.
We consider the problem of reasoning about the probability of
assertion violations in straight-line, nonlinear computations
involving uncertain quantities modeled as random variables. Such
computations are quite common in many areas such as cyber-physical
systems and numerical computation. Our approach extends
probabilistic affine forms, an interval-based calculus for precisely
tracking how the distribution of a given program variable depends on
uncertain inputs modeled as noise symbols. We extend probabilistic
affine forms using the precise tracking of dependencies between
noise symbols combined with the expectations and higher order
moments of the noise symbols. Next, we show how to prove bounds on
the probabilities that program variables take on specific values by
using concentration of measure inequalities. Thus, we enable a new
approach to this problem that explicitly avoids subdividing the
domain of inputs, as is commonly done in the related work. We
illustrate the approach in this paper on a variety of challenging
benchmark examples, and thus study its applicability to uncertainty
propagation.
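The combination of an affine form over bounded noise symbols with a concentration inequality can be illustrated in miniature. The affine form below is invented, and Hoeffding's inequality stands in for the broader family of concentration-of-measure inequalities the paper uses; a Monte Carlo estimate confirms the bound is sound:

```python
import math
import random

# Affine form y = 3 + 2*e1 - 1*e2, with independent noise symbols e_i
# uniform on [-1, 1] (zero mean).
coeffs = [2.0, -1.0]
center = 3.0

def hoeffding_tail(coeffs, t):
    """P(y - E[y] >= t) <= exp(-2 t^2 / sum_i range_i^2),
    where each a_i * e_i ranges over an interval of width 2|a_i|."""
    s = sum((2.0 * abs(a)) ** 2 for a in coeffs)
    return math.exp(-2.0 * t ** 2 / s)

t = 2.0
bound = hoeffding_tail(coeffs, t)  # bound on P(y >= 5)

rng = random.Random(1)
n = 100_000
hits = sum(
    center + sum(a * rng.uniform(-1, 1) for a in coeffs) >= center + t
    for _ in range(n)
)
mc = hits / n
assert mc <= bound  # the concentration bound soundly covers the MC estimate
print(round(bound, 3), mc)
```

The bound needs only the coefficients and ranges of the noise symbols, not a subdivision of the input domain, which is the advantage the abstract highlights.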
Deductive Proofs of Almost Sure Persistence and Recurrence Properties
Aleksandar Chakarov,
Yuen-Lam (Vris) Voronin,
and Sriram Sankaranarayanan
In Tools and Algorithms for Construction and Analysis of Systems (TACAS),
Lecture Notes in Computer Science
Vol. 9636,
pp. 260-279,
2016.
Martingale theory yields a powerful set of tools that have recently
been used to prove quantitative properties of stochastic systems such
as stochastic safety and qualitative properties such as almost sure
termination. In this paper, we examine proof techniques for
establishing almost sure persistence and recurrence properties of
infinite-state discrete time stochastic systems.
A persistence property FG(P) specifies that almost all
executions of the stochastic system eventually reach P and stay
there forever. Likewise, a recurrence property GF(Q) specifies
that a target set Q is visited infinitely often by almost all
executions of the stochastic system. Our approach extends classic
ideas on the use of Lyapunov-like functions to establish persistence
and recurrence properties. Next, we extend known constraint-based
invariant synthesis techniques to deduce the necessary
supermartingale expressions to partly mechanize such proofs. We
illustrate our techniques on a set of interesting examples.
Symbolic-Numeric Reachability Analysis of Closed-Loop Control Software
Aditya Zutshi,
Sriram Sankaranarayanan,
Jyotirmoy Deshmukh,
and Xiaoqing Jin
In Hybrid Systems: Computation and Control (HSCC),
pp. 135-144,
2016.
Note: Winner of HSCC 2016 Best Student Paper Award.
We study the problem of falsifying reachability properties of real-time control software acting in a closed-loop with a given model of the plant dynamics. Our
approach employs numerical techniques to simulate a plant model, which may be highly nonlinear and hybrid, in combination with symbolic simulation of the
controller software. The state-space and input-space of the plant are systematically searched using a plant abstraction that is implicitly defined by
"quantization" of the plant state, but never explicitly constructed. Simultaneously, the controller behaviors are explored using a symbolic
execution of the control software. On-the-fly exploration of the overall closed-loop abstraction results in abstract counterexamples, which are used to
refine the plant abstraction iteratively until a concrete violation is found. Empirical evaluation of our approach shows its promise in treating controller
software that has precise, formal semantics, using an exact method such as symbolic execution, while using numerical simulations to produce abstractions
of the underlying plant model that is often an approximation of the actual plant. We also discuss a preliminary comparison of our approach with
techniques that are primarily simulation-based.
Change of Basis Abstractions for Non-Linear Hybrid Systems
Sriram Sankaranarayanan
Nonlinear Analysis: Hybrid Systems,
19
pp. 107-133,
2016.
We present abstraction techniques that transform a given non-linear dynamical system into a linear system, or more generally, an algebraic system described by polynomials of bounded degree, so that invariant properties of the resulting abstraction can be used to infer invariants for the original system. The abstraction techniques rely on a change-of-basis transformation that associates each state variable of the abstract system with a function involving the state variables of the original system. We present conditions under which a given change of basis transformation for a non-linear system can define an abstraction. Furthermore, the techniques developed here apply to continuous systems defined by Ordinary Differential Equations (ODEs), discrete systems defined by transition systems and hybrid systems that combine continuous as well as discrete subsystems.
The techniques presented here allow us to discover, given a non-linear system, if a change of bases transformation involving degree-bounded polynomials yielding an algebraic abstraction exists. If so, our technique yields the resulting abstract system, as well. Our techniques enable the use of analysis techniques for linear systems to infer invariants for non-linear systems. We present preliminary evidence of the practical feasibility of our ideas using a prototype implementation.
In this paper, we examine linear programming (LP) based relaxations for synthesizing polynomial Lyapunov functions to prove the stability of polynomial ODEs. A common approach to Lyapunov function synthesis starts from a desired parametric polynomial form of the polynomial Lyapunov function. Subsequently, we encode the positive-definiteness of the function, and the negative-definiteness of its derivative over the domain of interest. Therefore, the key primitives for this encoding include: (a) proving that a given polynomial is positive definite over a domain of interest, and (b) encoding the positive definiteness of a given parametric polynomial, as a constraint on the unknown parameters. We first examine two classes of relaxations for proving polynomial positivity: relaxations by sum-of-squares (SOS) programs, against relaxations that produce linear programs. We compare both types of relaxations by examining the class of polynomials that can be shown to be positive in each case.
Next, we present a progression of increasingly more powerful LP relaxations based on expressing the given polynomial in its Bernstein form, as a linear combination of Bernstein polynomials. The well-known bounds on Bernstein polynomials over the unit box help us formulate increasingly precise LP relaxations that help us establish the positive definiteness of a polynomial over a bounded domain. Subsequently, we show how these LP relaxations can be used to search for Lyapunov functions for polynomial ODEs by formulating LP instances. We compare our approaches to synthesizing Lyapunov functions with approaches based on SOS programming relaxations. The approaches are evaluated on a suite of benchmark examples drawn from the literature and automatically synthesized benchmarks. Our evaluation clearly demonstrates the promise of LP relaxations, especially for finding polynomial local Lyapunov functions that prove that the system is asymptotically stable over a given bounded region containing the equilibrium. In particular, the LP approach is shown to be as fast as the SOS programming approach, but less prone to numerical problems.
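The core Bernstein-form certificate can be shown in the univariate case (the paper works with multivariate polynomials over boxes and builds LPs on top; this toy only tests fixed polynomials). If every Bernstein coefficient of a polynomial on [0,1] is positive, the polynomial is positive there, since it is a convex combination of those coefficients weighted by the nonnegative Bernstein basis:

```python
from math import comb

def bernstein_coeffs(a):
    """Monomial coefficients a[i] of p(x) = sum_i a[i] x^i  ->  Bernstein
    coefficients b[k] on [0, 1]: b[k] = sum_{i<=k} C(k,i)/C(n,i) * a[i]."""
    n = len(a) - 1
    return [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
            for k in range(n + 1)]

def certify_positive(a):
    """Sound but incomplete test: all Bernstein coefficients positive
    implies p(x) > 0 on [0, 1] (the converse may fail at low degree)."""
    return all(b > 0 for b in bernstein_coeffs(a))

# p(x) = x^2 - x + 1 is positive on [0, 1] (minimum 3/4 at x = 1/2).
print(certify_positive([1.0, -1.0, 1.0]))  # certificate found
# q(x) = x - 1/2 changes sign on [0, 1]; no certificate exists.
print(certify_positive([-0.5, 1.0]))
```

Because the Bernstein coefficients are linear in the monomial coefficients, requiring them to be positive for a parametric polynomial yields exactly the kind of LP constraint on the unknown Lyapunov parameters that the paper exploits; degree elevation tightens the relaxation.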
2015
Counter-Example Guided Synthesis of Control Lyapunov Functions For Switched Systems
Hadi Ravanbakhsh,
and Sriram Sankaranarayanan
In IEEE Control and Decision Conference (CDC),
pp. 4232-4239,
2015.
We investigate the problem of synthesizing switching controllers for
stabilizing continuous-time plants. First, we introduce a class of
control Lyapunov functions (CLFs) for switched systems along with a
switching strategy that yields a closed loop system with a
guaranteed minimum dwell time in each switching mode. However, the
challenge lies in automatically synthesizing appropriate
CLFs. Assuming a given fixed form for the CLF with unknown
coefficients, we derive quantified nonlinear constraints whose
feasible solutions (if any) correspond to CLFs for the original
system. However, solving quantified nonlinear constraints poses a challenge
to most LMI/BMI-based relaxations. Therefore, we investigate a
general approach called Counter-Example Guided Inductive Synthesis
(CEGIS), that has been widely used in the emerging area of automatic
program synthesis. We show how an LMI-based relaxation can be
formulated within the CEGIS framework for synthesizing CLFs. We
also evaluate our approach on a number of interesting benchmarks,
and compare the performance of the new approach with our previous
work that uses off-the-shelf nonlinear constraint solvers instead of
the LMI relaxation. The results show that synthesizing CLFs using LMI
solvers inside a CEGIS framework can be a computationally feasible
approach.
Refining the Closed Loop in the Data Age: Research-to-Practice Transitions in Diabetes Technology (Editorial)
Gregory P. Forlenza,
Sriram Sankaranarayanan,
and David M. Maahs
Diabetes Technology and Therapeutics,
17
(5)
pp. 304-306,
2015.
The editorial analyzes a recent clinical study reporting on
the impact of insulin pump shutoff on patients with
type-1 diabetes. In a broader context, it examines
the need for enhanced formal verification techniques
applied to more sophisticated artificial pancreas
controllers that remain under development.
Towards a Verified Artificial Pancreas: Challenges and Solutions for Runtime Verification
Fraser Cameron,
Georgios Fainekos,
David M. Maahs,
and Sriram Sankaranarayanan
In Proceedings of Runtime Verification (RV'15),
Lecture Notes in Computer Science
Vol. 9333,
pp. 3-17,
2015.
In this paper, we briefly examine recent developments in artificial pancreas controllers that automate the delivery of insulin to patients with type-1 diabetes. We argue for the need for offline and online runtime verification for these devices, and discuss challenges that make verification hard. Next, we examine a promising simulation-based falsification approach based on robustness semantics of temporal logics. These ideas are implemented in the tool S-Taliro, which automatically searches for violations of metric temporal logic (MTL) requirements for Simulink(tm)/Stateflow(tm) models. We illustrate the use of S-Taliro for finding interesting property violations in a PID-based hybrid closed-loop control system.
Simulation-Guided Parameter Synthesis for the Chance-Constrained Optimization of Control Systems
Yan Zhang,
Sriram Sankaranarayanan,
and Benjamin Gyori
In Proc. International Conference on Computer-Aided Design (ICCAD),
pp. 208-215,
2015.
We consider the problem of parameter synthesis for black-box systems whose operations are jointly influenced by a set of "tunable parameters" under the control of designers, and a set of uncontrollable stochastic parameters. The goal is to find values of the tunable parameters that ensure the satisfaction of given performance requirements with a high probability. Such problems are common in robust system design, including feedback controllers, biomedical devices, and many others. They can be naturally cast as chance-constrained optimization problems, which, however, are hard to solve precisely. We present a simulation-based approach that provides a piecewise approximation of a certain quantile function for the responses of interest. Using the piecewise approximations as objective functions, a collection of local optima are estimated, from which a global search based on simulated annealing is performed. The search yields tunable parameter values at which the performance requirements are satisfied with a high probability, despite variations in the stochastic parameters. Our approach is applied to three benchmarks: an insulin infusion pump model for patients with type-1 diabetes, a robust flight control problem for fixed-wing aircraft, and an ODE-based apoptosis model from systems biology.
2014
Iterative Computation of Polyhedral Invariant Sets for Polynomial Dynamical Systems
Mohamed Amin Ben Sassi,
Antoine Girard,
and Sriram Sankaranarayanan
In IEEE Conference on Decision and Control (CDC),
pp. 6348-6353,
2014.
This paper deals with the computation of positive polyhedral invariant sets for polynomial dynamical systems. A positive invariant set is a subset of the state space such that if the state of the system is initially in this set, then it remains inside the set for all future times. We present a procedure that constructs an invariant set iteratively, starting from an initial polyhedron that forms a "guess" at the invariant. Our iterative procedure attempts to prove at each stage that the given polyhedron is a positive invariant. This is achieved by first setting up a nonlinear program that encodes the invariance conditions for the vector field at each facet of the current polyhedron. This is relaxed to a linear program through the use of the blossoming principle for polynomials. If the set fails to be invariant, we use local sensitivity analysis based on the primal-dual solutions of the linear program to push its faces outwards or inwards in a bid to make it invariant. Doing so, however, keeps the face normals of the iterates fixed for all steps.
In this paper, we generalize the process to allow us to vary the normal vectors as well as the offsets of the individual faces. Doing so makes the procedure completely general, but at the same time increases its complexity. Nevertheless, we demonstrate that the new approach allows our procedure to recover from a poor initial choice of templates to yield better invariants.
Statistically Sound Verification and Optimization for Complex Systems
Yan Zhang,
Sriram Sankaranarayanan,
and Fabio Somenzi
In Automated Techniques for Verification and Analysis (ATVA),
Lecture Notes in Computer Science
Vol. 8837,
pp. 411-427,
2014.
We discuss the verification and optimization of complex systems with respect to a set of specifications under stochastic parameter variations. We introduce a simulation-based, statistically sound model inference approach. Our technique considers systems whose responses depend on a few design parameters and many stochastic parameters. The approach consists of two parts: verification and optimization. First, we verify whether the system satisfies the specifications for a given set of values of the design parameters using regression and a generalization technique based on statistical model checking. If not, we seek new values of the design parameters for which statistical verification succeeds. The proposed approach involves repeated simulation of the system with fixed design parameters and varying stochastic parameters. Quantile regression is used to construct a relational model that is further generalized into a statistically sound model. The resulting model is used to search iteratively for a new design point. We evaluate this approach on several applications. In each case, the performance is improved significantly compared to the nominal design.
Determination of personalized diabetes treatment plans using a two-delay model
Stephen M. Kissler,
Cody Cichowitz,
Sriram Sankaranarayanan,
and David M. Bortz
J. Theoretical Biology,
359
(Oct)
pp. 101-111,
2014.
Diabetes cases worldwide have risen steadily over the past decades, lending urgency to the search for more efficient, effective, and personalized ways to treat the disease. Current treatment strategies, however, may fail to maintain ultradian oscillations in blood glucose concentration, an important element of a healthy alimentary system. Building upon recent successes in mathematical modeling of the human glucose-insulin system, we show that both food intake and insulin therapy likely demand increasingly precise control over insulin sensitivity if oscillations at a healthy average glucose concentration are to be maintained. We then suggest guidelines and personalized treatment options for diabetic patients that maintain these oscillations. We show that for a type II diabetic, both blood glucose levels can be controlled and healthy oscillations maintained when the patient gets an hour of daily exercise and is placed on a combination of Metformin and sulfonylurea drugs. We note that insulin therapy and an additional hour of exercise will reduce the patient's need for sulfonylureas. Results of a modeling analysis suggest that a typical type I diabetic's blood glucose levels can be properly controlled with a constant insulin infusion between 0.45 and 0.7 microU/(ml*min). Lastly, we note that all suggested strategies rely on existing clinical techniques and established treatment measures, and so could potentially be of immediate use in the design of an artificial pancreas.
Infinite Horizon Safety Controller Synthesis through Disjunctive Polyhedral Abstract Interpretation
Hadi Ravanbakhsh,
and Sriram Sankaranarayanan
In Intl. Conference on Embedded Software (EMSOFT),
pp. 15:1-15:10,
2014.
This paper presents a controller synthesis approach using disjunctive polyhedral abstract interpretation. Our approach synthesizes infinite time-horizon controllers for safety properties with a discrete-time linear plant model and a switching feedback controller that is suitable for time-triggered implementations. The core idea behind our approach is to perform an abstract interpretation over disjunctions of convex polyhedra to identify states that are potentially uncontrollable. Complementing this set yields the set of controllable states, starting from which the safety property can be guaranteed by an appropriate control function. Since a straightforward disjunctive domain is computationally inefficient, we present an abstract domain based on a state partitioning scheme that allows us to efficiently control the complexity of the intermediate representations. Next, we focus on the automatic generation of a controller implementation from our final fixed point. We show that a balanced tree approach can yield efficient controller code with guarantees on the worst-case execution time of the controller. Finally, we evaluate our approach on a suite of benchmarks, comparing different instantiations with related synthesis tools. Our evaluation shows that our approach can successfully synthesize controller implementations for small to medium sized benchmarks.
Under-approximate Flowpipes for Non-linear Continuous Systems
Xin Chen,
Sriram Sankaranarayanan,
and Erika Abraham
In Formal Methods in Computer-Aided Design (FMCAD),
pp. 59-66,
2014.
We propose an approach for computing under- as well as over-approximations of the reachable sets of continuous systems defined by non-linear Ordinary Differential Equations (ODEs). Given a compact and connected initial set of states, described by polynomial inequalities, our approach computes under-approximations of the set of states reachable over time. Our approach is based on a simple yet elegant technique to obtain an accurate Taylor model over-approximation for a backward flowmap, based on well-known techniques for over-approximating the forward map. Next, we show that this over-approximation can be used to yield both over- and under-approximations for the forward reachable sets. Based on this result, we are able to conclude "may" as well as "must" reachability to prove properties or conclude the existence of counterexamples. A prototype of the approach is implemented and its performance is evaluated over a considerable number of benchmarks.
Multiple-Shooting CEGAR-based falsification for hybrid systems
Aditya Zutshi,
Sriram Sankaranarayanan,
Jyotirmoy Deshmukh,
and James Kapinski
In Intl. Conference on Embedded Software (EMSOFT),
pp. 5:1 - 5:10,
2014.
In this paper, we present an approach for finding violations of safety properties of hybrid systems. Existing approaches search for complete system trajectories that begin from an initial state and reach some unsafe state. We present an approach that searches over segmented trajectories, consisting of a sequence of segments starting from any system state. Adjacent segments may have gaps, which our approach then seeks to narrow iteratively. We show that segmented trajectories are actually paths in the abstract state graph obtained by tiling the state space with cells. Instead of creating the prohibitively large abstract state graph explicitly, our approach implicitly performs a randomized search on it using a scatter-and-simulate technique. This involves repeated simulations, graph search to find likeliest abstract counterexamples, and iterative refinement of the abstract state graph. Finally, we demonstrate our technique on a number of case studies ranging from academic examples to models of industrial-scale control systems.
Scope bounded software verification in Varvel
Franjo Ivancic,
Gogul Balakrishnan,
Aarti Gupta,
Sriram Sankaranarayanan,
Naoto Maeda,
Takashi Imoto,
Rakesh Pothengil,
and Mustafa Hussain
Journal of Automated Software Engineering (J. ASE),
pp. 1-14,
2014.
Software model checking and static analysis have matured over the last decade, enabling their use in automated software verification. However, lack of scalability makes these tools hard to apply in industry practice. Furthermore, approximations in the models of program and environment lead to a profusion of false alarms. This paper proposes DC2, a verification framework using scope-bounding to address the issue of scalability, while retaining enough precision to avoid false alarms in practice. DC2 splits the analysis problem into manageable parts, relying on a combination of three automated techniques: (a) techniques to infer useful specifications for functions in the form of pre- and post-conditions; (b) stub inference techniques that infer abstractions to replace function calls beyond the verification scope; and (c) automatic refinement of pre- and post-conditions using counterexamples that are deemed to be false alarms by a user. The techniques enable DC2 to perform iterative reasoning over the calling environment of functions, to find non-trivial bugs and fewer false alarms.
Based on the DC2 framework, we have developed a software model checking tool for C/C++ programs called Varvel, which has been in industrial use at NEC for a number of years. In addition to DC2, we describe other scalability and usability improvements in Varvel that have enabled its successful application on numerous large software projects. These include model simplifications, support for witness understanding to improve debugging assistance, and handling of C programs. We present experimental evaluations that demonstrate the effectiveness of DC2 and report on the usage of Varvel in NEC.
Expectation Invariants as Fixed Points of Probabilistic Programs
Aleksandar Chakarov,
and Sriram Sankaranarayanan
In Static Analysis Symposium (SAS),
Lecture Notes in Computer Science
Vol. 8723,
pp. 85-100,
2014.
We present static analyses for probabilistic loops using expectation invariants. Probabilistic loops are imperative while-loops augmented with calls to random variable generators. Whereas traditional program analysis uses Floyd-Hoare style invariants to over-approximate the set of reachable states, our approach synthesizes invariant inequalities involving the expected values of program expressions at the loop head. We first define the notion of expectation invariants, and demonstrate their usefulness in analyzing probabilistic program loops. Next, we present the set of expectation invariants for a loop as a fixed point of the pre-expectation operator over sets of program expressions. Finally, we use existing concepts from abstract interpretation theory to present an iterative analysis that synthesizes expectation invariants for probabilistic program loops. We show how the standard polyhedral abstract domain can be used to synthesize expectation invariants for probabilistic programs, and demonstrate the usefulness of our approach on some examples of probabilistic program loops.
QUICr: A Reusable Library for Parametric Abstraction of Sets and Numbers
Arlen Cox,
Bor-Yuh Evan Chang,
and Sriram Sankaranarayanan
In Computer-Aided Verification (CAV),
Lecture Notes in Computer Science
Vol. 8559,
pp. 866-873,
2014.
This paper introduces QUICr, an abstract domain combinator library that lifts any domain for numbers into an efficient domain for numbers and sets of numbers. As a library, it is useful for inferring relational data invariants in programs that manipulate data structures. As a testbed, it allows easy extension of widening and join heuristics, enabling adaptations to new and varied applications. In this paper we present the architecture of the library, guidelines on how to select heuristics, and an example instantiation of the library using the Apron library to verify set-manipulating programs.
In this paper, we focus on finding positive invariants and Lyapunov functions to establish reachability and stability properties, respectively, of polynomial ordinary differential equations (ODEs). In general, the search for such functions is a hard problem. As a result, numerous techniques have been developed to search for polynomial differential variants that yield positive invariants and polynomial Lyapunov functions that prove stability, for systems defined by polynomial differential equations. However, the systematic search for non-polynomial functions is considered a much harder problem, and has received much less attention.
In this paper, we combine ideas from computer algebra with the Sum-Of-Squares (SOS) relaxation for polynomial positive semi-definiteness to find non-polynomial differential variants and Lyapunov functions for polynomial ODEs. Using the well-known concept of Darboux polynomials, we show how they can, in many instances, naturally lead to specific forms of Lyapunov functions that involve rational, logarithmic, and exponential terms. We demonstrate the value of our approach by deriving non-polynomial Lyapunov functions for numerical examples drawn from the literature.
Abstract Acceleration of General Linear Loops
Bertrand Jeannet,
Peter Schrammel,
and Sriram Sankaranarayanan
In Principles of Programming Languages (POPL),
pp. 529-540,
2014.
We present abstract acceleration techniques for computing loop invariants for numerical programs with linear assignments and conditionals. Whereas abstract interpretation techniques typically over-approximate the set of reachable states iteratively, abstract acceleration captures the effect of the loop with a single, non-iterative transfer function applied to the initial states at the loop head. In contrast to previous acceleration techniques, our approach applies to any linear loop without restrictions. Its novelty lies in the use of the Jordan normal form decomposition of the loop body to derive symbolic expressions for the entries of the matrix modeling the effect of n>=0 iterations of the loop. The entries of such a matrix depend on n through complex polynomial, exponential and trigonometric functions. Therefore, we introduce an abstract domain for matrices that captures the linear inequality relations between these complex expressions. This results in an abstract matrix describing the fixpoint semantics of the loop. Our approach integrates smoothly into standard abstract interpreters and can handle programs with nested loops and loops containing conditional branches. We evaluate it over small but complex loops that are commonly found in control software, comparing it with other tools for computing linear loop invariants. The loops in our benchmarks typically exhibit polynomial, exponential and oscillatory behaviors that present challenges to existing approaches. Our approach finds non-trivial invariants to prove useful bounds on the values of variables for such loops, clearly outperforming the existing approaches in terms of precision while exhibiting good performance.
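The core idea, closed-form symbolic expressions for the entries of the n-step transition matrix, can be illustrated with the simplest non-trivial loop body (a counter increment; this is our own toy example, while the general case uses the Jordan decomposition and yields far richer entry functions):

```python
# For the loop body x' = A x with A = [[1, 1], [0, 1]] (i.e., x := x + y),
# the effect of n iterations has the closed form A^n = [[1, n], [0, 1]],
# so the accelerated transfer function can be applied in one step.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, n):
    R = [[1, 0], [0, 1]]  # identity
    for _ in range(n):
        R = mat_mul(R, A)
    return R

A = [[1, 1], [0, 1]]
# check the closed form against iterated multiplication
for n in range(6):
    assert mat_pow(A, n) == [[1, n], [0, 1]]
print(mat_pow(A, 4))  # [[1, 4], [0, 1]]
```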
Sparse statistical model inference for analog circuits under process variations
Yan Zhang,
Sriram Sankaranarayanan,
and Fabio Somenzi
In Asia and South Pacific Design Automation Conference (ASP-DAC),
pp. 449-454,
2014.
In this paper, we address the problem of performance modeling for transistor-level circuits under process variations. A sparse regression technique is introduced to characterize the relationship between the process parameters and the output responses. This approach relies on repeated simulations to find polynomial approximations of response surfaces. It employs a heuristic to construct sparse polynomial expansions and a stepwise regression algorithm based on LASSO to find low degree polynomial approximations. The proposed technique is able to handle many tens of process parameters with a small number of simulations when compared to an earlier approach using ordinary least squares. We present our approach in the context of statistical model inference (SMI), a recently proposed statistical verification framework for transistor-level circuits. Our experimental evaluation compares percentage yields predicted by our approach with Monte-Carlo simulations and SMI using ordinary least squares on benchmarks with up to 30 process parameters. The sparse-SMI approach is shown to require significantly fewer simulations, achieving orders of magnitude improvement in the run times with small differences in the resulting yield estimates.
2013
Probabilistic Temporal Logic Falsification of Cyber-Physical Systems
Houssam Abbas,
Georgios Fainekos,
Sriram Sankaranarayanan,
Franjo Ivancic,
and Aarti Gupta
ACM Transactions on Embedded Computing Systems (TECS),
12
(12s)
pp. 95,
2013.
We present a Monte-Carlo optimization technique for finding system behaviors that falsify a metric temporal logic (MTL) property. Our approach performs a random walk over the space of system inputs guided by a robustness metric defined by the MTL property. The robustness metric guides the search toward a falsifying behavior by steering it to trajectories with smaller robustness values. The resulting testing framework can be applied to a wide class of cyber-physical systems (CPS). We show through experiments on complex system models that our framework can help automatically falsify properties with more consistency as compared to other means, such as uniform sampling.
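The robustness metric can be sketched for the simplest safety property, G(x < c) ("always x < c"): the robustness of a sampled trace is the worst-case signed margin, negative exactly when the property is violated. This is our own minimal illustration, not the S-Taliro implementation:

```python
# Robustness of G(x < c) over a sampled trace: min_t (c - x[t]).
# A falsification search minimizes this value over candidate inputs;
# any trace with negative robustness is a concrete counterexample.
def robustness_always_lt(trace, c):
    return min(c - v for v in trace)

safe = [0.0, 0.5, 0.75]
unsafe = [0.0, 0.5, 1.25]
print(robustness_always_lt(safe, 1.0))    # 0.25: property holds with margin
print(robustness_always_lt(unsafe, 1.0))  # -0.25: violation found
```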
Static analysis for concurrent programs with applications
to data race detection
Vineet Kahlon,
Sriram Sankaranarayanan,
and Aarti Gupta
We propose a general framework for static analysis of concurrent multi-threaded programs in the presence of various types of synchronization primitives such as locks and pairwise rendezvous. In order to capture interference between threads, we use the notion of a transaction, i.e., a sequence of statements in a thread that can be executed atomically, without sacrificing the soundness of the analysis while yielding precise results. These transactions are delineated automatically, and are captured in the form of a transaction graph over the global control state space of the program. Starting from a coarse transaction graph, constructed by exploiting scheduling constraints related to synchronizations and partial order reduction, we iteratively refine the graph by removing statically unreachable nodes using the results of various analyses. Specifically, we use abstract interpretation to automatically derive program invariants, based on abstract domains of increasing precision. Progressive refinement of the transaction graph enhances scalability of the static analyses, yielding more precise invariants. We demonstrate the benefits of this framework in an application to find data race bugs in concurrent programs, where our static analyses serve to reduce the number of false warnings captured by an initial lockset analysis. This framework also facilitates use of model checking on the remaining warnings to generate concrete error traces, where we leverage the preceding static analyses to generate small program slices and the derived invariants to improve performance. We describe our experimental results on a suite of Linux device drivers.
A trajectory splicing approach to concretizing counterexamples for hybrid systems
Aditya Zutshi,
Sriram Sankaranarayanan,
Jyotirmoy Deshmukh,
and James Kapinski
In IEEE Conference on Decision and Control (CDC),
pp. 3918-3925,
2013.
This paper examines techniques for finding falsifying trajectories of hybrid systems using an approach that we call trajectory splicing. Many formal verification techniques for hybrid systems, including flowpipe construction, can identify plausible abstract counterexamples for property violations. However, there is often a gap between the reported abstract counterexamples and the concrete system trajectories. Our approach starts with a candidate sequence of disconnected trajectory segments, each segment lying inside a discrete mode. However, such disconnected segments do not form concrete violations due to the gaps that exist between the ending state of one segment and the starting state of the subsequent segment. Therefore, trajectory splicing uses local optimization to minimize the gap between these segments, effectively splicing them together to form a concrete trajectory. We demonstrate the use of our approach for falsifying safety properties of hybrid systems using standard optimization techniques. As such, our approach is not restricted to linear systems. We compare our approach with other falsification approaches including uniform random sampling and a robustness guided falsification approach used in the tool S-Taliro. Our preliminary evaluation clearly shows the potential of our approach to search for candidate trajectory segments and use them to find concrete property violations.
From statistical model checking to statistical model inference:
characterizing the effect of process variations in analog
circuits
Yan Zhang,
Sriram Sankaranarayanan,
Fabio Somenzi,
Xin Chen,
and Erika Abraham
In International Conference on Computer-Aided
Design (ICCAD),
pp. 662-669,
2013.
This paper studies the effect of parameter variation on the behavior of analog circuits at the transistor (netlist) level. It is well known that variation in key circuit parameters can often adversely impact the correctness and performance of analog circuits during fabrication. An important problem lies in characterizing a safe subset of the parameter space for which the circuit can be guaranteed to satisfy the design specification. Due to the sheer size and complexity of analog circuits, a formal approach to the problem remains out of reach, especially at the transistor level. Therefore, we present a statistical model inference approach that exploits recent advances in statistical verification techniques. Our approach uses extensive circuit simulations to infer polynomials that approximate the behavior of a circuit. A procedure inspired by statistical model checking is then introduced to produce "statistically sound" models that extend the polynomial approximation. The resulting model can be viewed as a statistically guaranteed over-approximation of the circuit behavior. The proposed technique is demonstrated with two case studies in which it identifies subsets of parameters that satisfy the design specifications.
Lyapunov Function Synthesis Using Handelman Representations
Sriram Sankaranarayanan,
Xin Chen,
and Erika Abraham
In IFAC conference on Nonlinear Control Systems (NOLCOS),
pp. 576-581,
2013.
We investigate linear programming relaxations to synthesize Lyapunov functions that establish the stability of a given system over a bounded region. In particular, we attempt to discover functions that are more readily useful inside symbolic verification tools for proving the correctness of control systems. Our approach searches for a Lyapunov function, given a parametric form with unknown coefficients, by constructing a system of linear inequality constraints over the unknown parameters. We examine two complementary ideas for the linear programming relaxation, including interval evaluation of the polynomial form and "Handelman representations" for positive polynomials over polyhedral sets. Our approach is implemented as part of a branch-and-relax scheme for discovering Lyapunov functions. We evaluate our approach using a prototype implementation, comparing it with techniques based on Sum-of-Squares (SOS) programming. A comparison with SOSTOOLS is carried out over a set of benchmarks gathered from the related work. The evaluation suggests that our approach using Simplex is generally fast, and discovers Lyapunov functions that are simpler and easy to check. They are suitable for use inside symbolic formal verification tools for reasoning about continuous systems.
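The "Handelman representation" idea can be made concrete with a tiny hand-checked univariate example (our own illustration, not the paper's implementation): over [0,1], described by the constraints x >= 0 and 1 - x >= 0, a polynomial is certified nonnegative if it equals a nonnegative combination of products of the constraint polynomials x and 1 - x. An LP searches for the multipliers; here we simply fix a known solution and verify it by expansion.

```python
def pmul(p, q):
    """Multiply two polynomials given as power-basis coefficient lists."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

# Degree <= 2 products of the interval constraints g1 = x, g2 = 1 - x
x, omx = [0, 1], [1, -1]
products = [[1], x, omx, pmul(x, x), pmul(x, omx), pmul(omx, omx)]
lam = [0, 0, 0, 1, 1, 1]  # nonnegative multipliers (an LP would find these)

# Certificate: x^2 + x(1-x) + (1-x)^2 should expand to p(x) = 1 - x + x^2
cert = [0.0]
for l, g in zip(lam, products):
    cert = padd(cert, [l * c for c in g])
print(cert)  # [1.0, -1.0, 1.0], i.e. p(x) = 1 - x + x^2, hence p >= 0 on [0,1]
```

Matching the coefficients of p against the unknown multipliers is exactly a linear feasibility problem, which is why this relaxation reduces to an LP.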
Regular Real Analysis
Swarat Chaudhuri,
Sriram Sankaranarayanan,
and Moshe Vardi
In IEEE Symposium on Logic in Computer Science (LICS),
pp. 509-518,
2013.
We initiate the study of regular real analysis, or the analysis of real functions that can be encoded by automata on infinite words. It is known that omega-automata can be used to represent relations between real vectors, reals being represented in exact precision as infinite streams. The regular functions studied here constitute the functional subset of such relations. We show that some classic questions in function analysis can become elegantly computable in the context of regular real analysis. Specifically, we present an automata-theoretic technique for reasoning about limit behaviors of regular functions, and obtain, using this method, a decision procedure to verify the continuity of a regular function. Several other decision procedures for regular functions - for finding roots, fixpoints, minima, etc. - are also presented. At the same time, we show that the class of regular functions is quite rich, and includes functions that are highly challenging to encode using traditional symbolic notation.
Probabilistic Program Analysis using Martingales
Aleksandar Chakarov,
and Sriram Sankaranarayanan
In Computer-Aided Verification (CAV),
Lecture Notes in Computer Science
Vol. 8044,
pp. 511-526,
2013.
We present techniques for the analysis of infinite state probabilistic programs to synthesize probabilistic invariants and prove almost-sure termination. Our analysis is based on the notion of (super) martingales from probability theory. First, we define the concept of (super) martingales for loops in probabilistic programs. Next, we present the use of concentration of measure inequalities to bound the values of martingales with high probability. This directly allows us to infer probabilistic bounds on assertions involving the program variables. Next, we present the notion of a super martingale ranking function (SMRF) to prove almost sure termination of probabilistic programs. Finally, we extend constraint-based techniques to synthesize martingales and super-martingale ranking functions for probabilistic programs. We present some applications of our approach to reason about invariance and termination of small but complex probabilistic programs.
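As a deliberately tiny illustration of the supermartingale idea, consider a biased random walk loop (our own example, not drawn from the paper): if a candidate function V decreases in expectation by a fixed amount at every iteration, it acts as a supermartingale ranking function witnessing almost-sure termination.

```python
from fractions import Fraction

# Loop: while x > 0: x := x - 1 with prob 3/4, else x := x + 1
# Candidate supermartingale ranking function: V(x) = x
step = [(Fraction(3, 4), -1), (Fraction(1, 4), +1)]  # (probability, increment)

def pre_expectation(V, x):
    """E[V(x') | x] after one loop iteration from state x."""
    return sum(p * V(x + dx) for p, dx in step)

V = lambda x: Fraction(x)
drift = pre_expectation(V, 10) - V(10)
print(drift)  # -1/2: V decreases by 1/2 in expectation at every step
```

The constraint-based synthesis described in the abstract searches for coefficients of such a V so that the drift condition holds at all reachable states, rather than checking a single candidate at a single state as done here.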
Flow*: An Analyzer for Non-Linear Hybrid Systems
Xin Chen,
Erika Abraham,
and Sriram Sankaranarayanan
In Computer-Aided Verification (CAV),
Lecture Notes in Computer Science
Vol. 8044,
pp. 258-263,
2013.
The tool Flow* performs Taylor model-based flowpipe construction for non-linear (polynomial) hybrid systems. Flow* combines well-known Taylor model arithmetic techniques for guaranteed approximations of the continuous dynamics in each mode with a combination of approaches for handling mode invariants and discrete transitions. Flow* supports a wide variety of optimizations including adaptive step sizes, adaptive selection of approximation orders and the heuristic selection of template directions for aggregating flowpipes. This paper describes Flow* and demonstrates its performance on a series of non-linear continuous and hybrid system benchmarks. Our comparisons show that Flow* is competitive with other tools.
Static Analysis for Probabilistic Programs: Inferring Whole Program Properties from Finitely Many Paths
Sriram Sankaranarayanan,
Aleksandar Chakarov,
and Sumit Gulwani
In ACM Conference on Programming Language Design and Implementation (PLDI),
pp. 447-458,
2013.
We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a "coverage" bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.
Exploring the Internal State of User Interfaces by Combining Computer Vision Techniques with Grammatical Inference
Paul Givens,
Aleksandar Chakarov,
Sriram Sankaranarayanan,
and Tom Yeh
In Proc. International Conference on Software Engg. (NIER track),
pp. 1165-1168,
2013.
In this paper, we present a promising approach to systematically testing graphical user interfaces (GUI) in a platform independent manner. Our framework uses standard computer vision techniques through a python-based scripting language (Sikuli script) to identify key graphical elements in the screen and automatically interact with these elements by simulating keypresses and pointer clicks. The sequence of inputs and outputs resulting from the interaction is analyzed using grammatical inference techniques that can infer the likely internal states and transitions of the GUI based on the observations. Our framework handles a wide variety of user interfaces ranging from traditional pull down menus to interfaces built for mobile platforms such as Android and iOS. Furthermore, the automaton inferred by our approach can be used to check for potentially harmful patterns in the interface's internal state machine such as design inconsistencies (e.g., a keypress does not have the intended effect) and mode confusion that can make the interface hard to use. We describe an implementation of the framework and demonstrate its working on a variety of interfaces including the user-interface of a safety critical insulin infusion pump that is commonly used by type-1 diabetic patients.
QUIC Graphs: Relational Invariant Generation for Containers
Arlen Cox,
Bor-Yuh Evan Chang,
and Sriram Sankaranarayanan
In European Conference on Object-Oriented Programming (ECOOP),
Lecture Notes in Computer Science
Vol. 7920,
pp. 401-425,
2013.
Programs written in modern languages perform intricate
manipulations of containers such as arrays, lists,
dictionaries, and sets. We present an abstract
interpretation-based framework for automatically
inferring relations between the set of values stored
in these containers. Relations include inclusion
relations over unions and intersections, as well as
quantified relationships with scalar variables. We
develop an abstract domain constructor that builds a
container domain out of a Quantified
Union-Intersection Constraint (QUIC) graph
parameterized by an arbitrary base domain. We
instantiate our domain with a polyhedral base domain
and evaluate it on programs extracted from the
Python test suite. Over traditional, non-relational
domains, we find significant precision improvements
with minimal performance cost.
2012
Taylor Model Flowpipe Construction for Non-linear Hybrid Systems
Xin Chen,
Erika Abraham,
and Sriram Sankaranarayanan
In Real Time Systems Symposium (RTSS),
pp. 183-192,
2012.
We propose an approach for verifying non-linear
hybrid systems using higher-order Taylor models that are a
combination of bounded degree polynomials over the initial
conditions and time, bloated by an interval. Taylor models are an
effective means for computing rigorous bounds on the complex
time trajectories of non-linear differential equations. As a result,
Taylor models have been successfully used to verify properties of
non-linear continuous systems. However, the handling of discrete
(controller) transitions remains a challenging problem.
In this paper, we provide techniques for handling the effect of
discrete transitions on Taylor model flowpipe construction. We
explore various solutions based on two ideas: domain contraction
and range over-approximation. Instead of explicitly computing
the intersection of a Taylor model with a guard set, domain
contraction makes the domain of a Taylor model smaller by
cutting away parts for which the intersection is empty. It is
complemented by range over-approximation that translates Taylor
models into commonly used representations such as template
polyhedra or zonotopes, on which intersections with guard sets
have been previously studied. We provide an implementation of
the techniques described in the paper and evaluate the various
design choices over a set of challenging benchmarks.
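The "range over-approximation" step described above can be illustrated with plain interval arithmetic: evaluating a truncated Taylor polynomial over its domain yields a sound (if conservative) enclosure of its range. The following is a minimal sketch with an invented example, not Flow*'s actual Taylor model arithmetic:

```python
def interval_add(a, b):
    # Sum of two intervals (lo, hi).
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    # Product of two intervals: min/max over the endpoint products.
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def poly_range(coeffs, dom):
    # Horner evaluation in interval arithmetic: a sound (but conservative)
    # enclosure of p(x) over dom; coeffs is constant-first.
    acc = (coeffs[-1], coeffs[-1])
    for c in reversed(coeffs[:-1]):
        acc = interval_add((c, c), interval_mul(acc, dom))
    return acc

# p(x) = 1 - x + 0.5 x^2 (a truncated Taylor series of exp(-x)) over [0, 0.5].
lo, hi = poly_range([1.0, -1.0, 0.5], (0.0, 0.5))
print(lo <= 0.625 and hi >= 1.0)  # enclosure contains the true range [0.625, 1]
```

The enclosure is wider than the true range, which is exactly the conservatism that techniques such as domain contraction and higher-order Taylor models work to reduce.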
A Bit too Precise? Bounded Verification of Quantized Digital Filters
Arlen Cox,
Sriram Sankaranarayanan,
and Bor-Yuh Evan Chang
In Tools and Algorithms for the Construction and Analysis of Systems (TACAS),
Lecture Notes in Computer Science
Vol. 7214,
pp. 33-42,
2012.
Digital filters are simple yet ubiquitous components of a wide variety of digital processing and control systems. Errors in the filters can be catastrophic. Traditionally, digital filters have been verified using methods from control theory and extensive testing. We study two alternative verification techniques: bit-precise analysis and real-valued error approximations. In this paper, we empirically evaluate several variants of these two fundamental approaches for verifying fixed-point implementations of digital filters. We design our comparison to reveal the best possible approach towards verifying real-world designs of infinite impulse response (IIR) digital filters. Our study reveals broader insights into cases where bit-reasoning is absolutely necessary and suggests efficient approaches using modern satisfiability-modulo-theories (SMT) solvers.
Falsification of temporal properties of hybrid systems using
the cross-entropy method
Randomized testing is a popular approach for checking properties of large embedded system designs. It is well known that a uniform random choice of test inputs is often sub-optimal. Ideally, the choice of inputs has to be guided by choosing the right input distributions in order to expose corner-case violations. However, this is also known to be a hard problem, in practice. In this paper, we present an application of the cross-entropy method for adaptively choosing input distributions for falsifying temporal logic properties of hybrid systems. We present various choices for representing input distribution families for the cross-entropy method, ranging from a complete partitioning of the input space into cells to a factored distribution of the input using graphical models.
Finally, we experimentally compare the falsification approach using the cross-entropy method to other stochastic and heuristic optimization techniques implemented inside the tool S-Taliro over a set of benchmark systems. The performance of the cross-entropy method is quite promising. We find that sampling inputs using the cross-entropy method guided by trace robustness can discover violations faster and more consistently than the other competing methods considered.
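The core cross-entropy update described in this abstract can be sketched in a few lines. This is an invented one-dimensional toy, not the paper's input parametrization: a Gaussian over a single input is repeatedly refit to the lowest-robustness ("elite") samples, driving the search toward a robustness below zero (i.e., a property violation).

```python
import random
import statistics

def robustness(u):
    # Toy stand-in for a trace-robustness value: negative means the
    # (hypothetical) temporal property is violated. Minimum at u = 3.
    return (u - 3.0) ** 2 - 0.25

def cross_entropy_minimize(n_iter=30, samples=100, elite_frac=0.1, seed=0):
    # Maintain a Gaussian over the input; each iteration, sample, sort by
    # robustness, and refit the Gaussian to the elite set -- the CE update.
    rng = random.Random(seed)
    mu, sigma = 0.0, 5.0
    for _ in range(n_iter):
        xs = [rng.gauss(mu, sigma) for _ in range(samples)]
        xs.sort(key=robustness)
        elite = xs[: max(2, int(elite_frac * samples))]
        mu = statistics.mean(elite)
        sigma = max(statistics.stdev(elite), 1e-6)
    return mu, robustness(mu)

u_star, rob = cross_entropy_minimize()
print(rob < 0)  # a falsifying input (negative robustness) was found
```

The paper's contribution lies in the choice of distribution families over hybrid-system input signals (cell partitions, graphical models); the update rule itself is the one shown here.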
Verification of Automotive Control Applications using S-Taliro
Georgios Fainekos,
Sriram Sankaranarayanan,
Koichi Ueda,
and Hakan Yazarel
In American Control Conference (ACC),
pp. 3567-3572,
2012.
S-TALIRO is a software toolbox that performs stochastic search for system trajectories that falsify realtime temporal logic specifications. S-TaLiRo is founded on the notion of robustness of temporal logic specifications. In this paper, we present a dynamic programming algorithm for computing the robustness of temporal logic specifications with respect to system trajectories. We also demonstrate that typical automotive functional requirements can be captured and falsified using temporal logics and S-TALIRO.
Timed Relational Abstractions for Sampled Data Control Systems
Aditya Zutshi,
Sriram Sankaranarayanan,
and Ashish Tiwari
In Computer-Aided Verification (CAV),
Lecture Notes in Computer Science
Vol. 7358,
pp. 343-361,
2012.
In this paper, we define timed relational abstractions for verifying sampled data control systems. Sampled data control systems consist of a plant, modeled as a hybrid system, and a synchronous controller, modeled as a discrete transition system. The controller computes control inputs and/or sends control events to the plant based on the periodically sampled state of the plant. The correctness of the system depends on the controller design as well as an appropriate choice of the controller sampling period.
Our approach constructs a timed relational abstraction of the hybrid plant by replacing the continuous plant dynamics by relations. These relations map a state of the plant to states reachable within the sampling time period. We present techniques for building timed relational abstractions, while taking care of discrete transitions that can be taken by the plant between samples. The resulting abstractions are better suited for the verification of sampled data control systems. The abstractions focus on the states that can be observed by the controller at the sample times, while abstracting away behaviors between sample times conservatively. The resulting abstractions are discrete, infinite-state transition systems. Thus conventional verification tools can be used to verify safety properties of these abstractions. We use k-induction to prove safety properties and bounded model checking (BMC) to find potential falsifications. We present our idea, its implementation and results on many benchmark examples.
Simulating Insulin Infusion Pump Risks by In-Silico Modeling of the Insulin-Glucose Regulatory System
Sriram Sankaranarayanan,
and Georgios Fainekos
In Computational Methods in Systems Biology (CMSB),
Lecture Notes in Computer Science
Vol. 7605,
pp. 322-341,
2012.
We present a case study on the use of robustness-guided and statistical model checking approaches for simulating risks due to insulin infusion pump usage by diabetic patients. Insulin infusion pumps allow for a continuous delivery of insulin with varying rates and delivery profiles to help patients self-regulate their blood glucose levels. However, the use of infusion pumps and continuous glucose monitors can pose risks to the patient including chronically elevated blood glucose levels (hyperglycemia) or dangerously low glucose levels (hypoglycemia).
In this paper, we use mathematical models of the basic insulin-glucose regulatory system in a diabetic patient, insulin infusion pumps, and the user's interaction with these pumps defined by commonly used insulin infusion strategies for maintaining normal glucose levels. These strategies include common guidelines taught to patients by physicians and certified diabetes educators and have been implemented in commercially available insulin bolus calculators. Furthermore, we model the failures in the devices themselves along with common errors in the usage of the pump. We compose these models together and analyze them using two related techniques: (a) robustness guided state-space search to explore worst-case scenarios and (b) statistical model checking techniques to assess the probabilities of hyper- and hypoglycemia risks. Our technique can be used to identify the worst-case effects of the combination of many different kinds of failures and place high confidence bounds on their probabilities.
On the revision problem of specification automata
Kangjin Kim,
Georgios E. Fainekos,
and Sriram Sankaranarayanan
One of the important challenges in robotics is the automatic synthesis of provably correct controllers from high level specifications. One class of such algorithms operates in two steps: (i) high level discrete controller synthesis and (ii) low level continuous controller synthesis. In this class of algorithms, when phase (i) fails, then it is desirable to provide feedback to the designer in the form of revised specifications that can be achieved by the system. In this paper, we address the minimal revision problem for specification automata. That is, we construct automata specifications that are as "close" as possible to the initial user intent, by removing the minimum number of constraints from the specification that cannot be satisfied. We prove that the problem is computationally hard and we encode it as a satisfiability problem. Then, the minimal revision problem can be solved by utilizing efficient SAT solvers.
Invariant Generation for Parametrized Systems using Self-Reflection
Alejandro Sanchez,
Sriram Sankaranarayanan,
Cesar Sanchez,
and Bor-Yuh Evan Chang
In Static Analysis Symposium (SAS),
Lecture Notes in Computer Science
Vol. 7460,
pp. 146-163,
2012.
We examine the problem of inferring invariants for
parametrized systems. Parametrized systems are
concurrent systems consisting of an a priori
unbounded number of process instances running the
same program. Such systems are commonly encountered
in many situations including device drivers,
distributed systems, and robotic swarms. In this
paper we describe a technique that enables
leveraging off-the-shelf invariant generators
designed for sequential programs to infer invariants
of parametrized systems. The central challenge in
invariant inference for parametrized systems is that
naïvely exploding the transition system with all
interleavings is not just impractical but
impossible. In our approach, the key enabler is the
notion of a reflective abstraction that we prove has
an important correspondence with inductive
invariants. This correspondence naturally gives rise
to an iterative invariant generation procedure that
alternates between computing candidate invariants
and creating reflective abstractions.
Piecewise linear modeling of nonlinear devices for formal
verification of analog circuits
Yan Zhang,
Sriram Sankaranarayanan,
and Fabio Somenzi
In Formal Methods in Computer-Aided Design (FMCAD),
pp. 196-203,
2012.
We consider different piecewise linear (PWL) models for
nonlinear devices in the context of formal DC
operating point and transient analyses of analog
circuits. PWL models allow us to encode a
verification problem as constraints in linear
arithmetic, which can be solved efficiently using
modern SMT solvers. Numerous approaches to piecewise
linearization are possible, including piecewise
constant, simplicial piecewise linearization and
canonical piecewise linearization. We address the
question of which PWL modeling approach is the most
suitable for formal verification by experimentally
evaluating the performance of various PWL models in
terms of running time and accuracy for the DC
operating point and transient analyses of several
analog circuits. Our results are quite surprising:
piecewise constant (PWC) models, the simplest
approach, seem to be the most suitable in terms of
the trade-off between modeling precision and the
overall analysis time. Contrary to expectations,
more sophisticated device models do not necessarily
provide significant gains in accuracy, and may
result in increased running time. We also present
evidence suggesting that PWL models may not be
suitable for certain transient analyses.
Object Model Construction for Inheritance in C++ and Its Applications to Program Analysis
Jing Yang,
Gogul Balakrishnan,
Naoto Maeda,
Franjo Ivancic,
Aarti Gupta,
Nishant Sinha,
Sriram Sankaranarayanan,
and Naveen Sharma
In Compiler Construction (CC),
Lecture Notes in Computer Science
Vol. 7210,
pp. 144-164,
2012.
Modern object-oriented programming languages such as C++ provide convenient abstractions and data encapsulation mechanisms for software developers. However, these features also complicate testing and static analysis of programs that utilize object-oriented programming concepts. In particular, the C++ language exhibits features such as multiple inheritance, static and dynamic typecasting that make static analyzers for C++ quite hard to implement. In this paper, we present an approach where static analysis is performed by lowering the original C++ program into a semantically equivalent C program. However, unlike existing translation mechanisms that utilize complex pointer arithmetic operations, virtual-base offsets, virtual-function pointer tables, and calls to run-time libraries to model C++ features, our translation is targeted towards making static program analyzers for C easier to write and provide more precise results. We have implemented our ideas in a framework for C called CILpp that is analogous to the popular C Intermediate Language (CIL) framework. We evaluate the effectiveness of our translation in a bug finding tool that uses abstract interpretation and model checking. The bug finding tool uncovered several previously unknown bugs in C++ open source projects.
A model-based approach to synthesizing insulin infusion
pump usage parameters for diabetic patients
Sriram Sankaranarayanan,
Christopher Miller,
Rangarajan Raghunathan,
Hadi Ravanbakhsh,
and Georgios E. Fainekos
We present a model-based approach to synthesizing insulin infusion pump usage parameters against varying meal scenarios and physiological conditions. Insulin infusion pumps are commonly used by type-1 diabetic patients to control their blood glucose levels. The amounts of insulin to be infused are calculated based on parameters such as insulin-to-carbohydrate ratios and correction factors that need to be calibrated carefully for each patient. Frequent and careful calibration of these parameters is essential for avoiding complications such as hypoglycemia and hyperglycemia. In this paper, we propose to synthesize optimal parameters for meal bolus calculation starting from models of the patient's insulin-glucose regulatory system and the infusion pump. Various off-the-shelf global optimization techniques are used to search for parameter values that minimize a penalty function defined over the predicted glucose sensor readings. The penalty function "rewards" glucose levels that lie within the prescribed ranges and "penalizes" the occurrence of hypoglycemia and hyperglycemia. We evaluate our approach using a model of the insulin-glucose regulatory system proposed by Dalla Man et al. Using this model, we compare various strategies for optimizing pump usage parameters for a virtual population of in-silico patients.
2011
Access Nets: Modeling Access to Physical Spaces
Robert Frohardt,
Bor-Yuh Evan Chang,
and Sriram Sankaranarayanan
In Verification, Model Checking, and Abstract Interpretation (VMCAI),
Lecture Notes in Computer Science
Vol. 6538,
pp. 184-198,
2011.
Electronic, software-managed mechanisms using, for example, radio-frequency identification (RFID) cards, enable great flexibility in specifying access control policies to physical spaces. For example, access rights may vary based on time of day or could differ in normal versus emergency situations. With such fine-grained control, understanding and reasoning about what a policy permits becomes surprisingly difficult requiring knowledge of permission levels, spatial layout, and time. In this paper, we present a formal modeling framework, called Access Nets, suitable for describing a combination of access permissions, physical spaces, and temporal constraints. Furthermore, we provide evidence that model checking techniques are effective in reasoning about physical access control policies. We describe our results from a tool that uses reachability analysis to validate security policies.
Model-Based Dependability Analysis of Programmable Drug
Infusion Pumps
Sriram Sankaranarayanan,
Hadjar Homaei,
and Clayton Lewis
In Formal Methods for Timed Systems (FORMATS),
Lecture Notes in Computer Science
Vol. 6919,
pp. 317-334,
2011.
Infusion pumps are commonly used in home/hospital
care to inject drugs into a patient at programmable
rates over time. However, in practice, a combination
of faults including software errors, mechanical
failures and human error can lead to catastrophic
situations, causing death or serious harm to the
patient. Dependability analysis techniques such as
failure mode effect analysis (FMEA) can be used to
predict the worst case outcomes of such faults and
facilitate the development of remedies against them.
In this paper, we present the use of model-checking
to automate the dependability analysis of
programmable, real-time medical devices. Our
approach uses timed and hybrid automata to model the
real-time operation of the medical device and its
interactions with the care giver and the
patient. Common failure modes arising from device
failures and human error are modeled in our
framework. Specifically, we use "mistake models"
derived from human factor studies to model the
effects of mistakes committed by the operator. We
present a case-study involving an infusion pump used
to manage pain through the infusion of analgesic
drugs. The dynamics of analgesic drugs are modeled
by empirically validated pharmacokinetic
models. Using model checking, our technique can
systematically explore numerous combinations of
failures and characterize the worst-case effects of
these failures.
DC2: A framework for scalable, scope-bounded software verification
Franjo Ivancic,
Gogul Balakrishnan,
Aarti Gupta,
Sriram Sankaranarayanan,
Naoto Maeda,
Hiroki Tokuoka,
Takashi Imoto,
and Yoshiaki Miyazaki
In Automated Software Engg. (ASE),
pp. 133-142,
2011.
Software model checking and static analysis have matured
over the last decade, enabling their use in
automated software verification. However, lack of
scalability makes these tools hard to
apply. Furthermore, approximations in the models of
program and environment lead to a profusion of false
alarms. This paper proposes DC2, a verification
framework using scope-bounding to bridge these
gaps. DC2 splits the analysis problem into
manageable parts, relying on a combination of three
automated techniques: (a) techniques to infer useful
specifications for functions in the form of pre- and
post-conditions; (b) stub inference techniques that
infer abstractions to replace function calls beyond
the verification scope; and (c) automatic refinement
of pre- and post-conditions from false alarms
identified by a user. DC2 enables iterative
reasoning over the calling environment, to help in
finding non-trivial bugs and fewer false alarms. We
present an experimental evaluation that demonstrates
the effectiveness of DC2 on several open-source and
industrial software projects.
S-TaLiRo: A Tool for Temporal Logic Falsification for Hybrid
Systems
Yashwanth Annpureddy,
Che Liu,
Georgios E. Fainekos,
and Sriram Sankaranarayanan
In Tools and Algorithms for Construction and Analysis of Systems (TACAS),
Lecture Notes in Computer Science
Vol. 6605,
pp. 254-257,
2011.
S-TaLiRo is a Matlab (TM) toolbox that searches for
trajectories of minimal robustness in
Simulink/Stateflow diagrams. It can analyze
arbitrary Simulink models or user defined functions
that model the system. At the heart of the tool, we
use randomized testing based on stochastic
optimization techniques including Monte-Carlo
methods and Ant-Colony Optimization. Among the
advantages of the toolbox is the seamless
integration inside the Matlab environment, which is
widely used in the industry for model-based
development of control software. We present the
architecture of S-TaLiRo and its working on an
application example.
Symbolic modular deadlock analysis
Jyotirmoy V. Deshmukh,
E. Allen Emerson,
and Sriram Sankaranarayanan
Methods in object-oriented concurrent libraries often
encapsulate internal synchronization details. As a
result of information hiding, clients calling the
library methods may cause thread safety violations
by invoking methods in an unsafe manner. This is
frequently a cause of deadlocks. Given a concurrent
library, we present a technique for inferring
interface contracts that specify permissible
concurrent method calls and patterns of aliasing
among method arguments. In this work, we focus on
deriving contracts that guarantee deadlock-free
execution for the methods in the library. The
contracts also help client developers by documenting
required assumptions about the library
methods. Alternatively, the contracts can be
statically enforced in the client code to detect
potential deadlocks in the client. Our technique
combines static analysis with a symbolic encoding
scheme for tracking lock dependencies, allowing us
to synthesize contracts using a SMT
solver. Additionally, we investigate extensions of
our technique to reason about deadlocks in libraries
that employ signaling primitives such as wait-notify
for cooperative synchronization. Our prototype tool
analyzes over a million lines of code for some
widely-used Java libraries within an hour, thus
demonstrating its scalability and
efficiency. Furthermore, the contracts inferred by
our approach have been able to pinpoint real
deadlocks in clients, i.e. deadlocks that have been
a part of bug-reports filed by users and developers
of client code.
The Flow-Insensitive Precision of Andersen's Analysis in
Practice
Sam Blackshear,
Bor-Yuh Evan Chang,
Sriram Sankaranarayanan,
and Manu Sridharan
In Static Analysis Symposium (SAS),
Lecture Notes in Computer Science
Vol. 6887,
pp. 60-76,
2011.
We present techniques for determining the precision gap
between Andersen's points-to analysis and precise
flow-insensitive points-to analysis in
practice. While previous work has shown that such a
gap may exist, no efficient algorithm for precise
flow-insensitive analysis is known, making
measurement of the gap on real-world programs
difficult. We give an algorithm for precise
flow-insensitive analysis of programs with finite
memory, based on a novel technique for refining any
points-to analysis with a search for
flow-insensitive witnesses. We give a compact
symbolic encoding of the technique that enables
computing the search using a tuned SAT solver. We
also present extensions of the algorithm that enable
computing lower and upper bounds on the precision
gap in the presence of dynamic memory allocation. In
our experimental evaluation over a suite of small-
to medium-sized C programs, we never observed a
precision gap between Andersen's analysis and the
precise analysis. In other words, Andersen's
analysis computed a precise flow-insensitive result
for all of our benchmarks. Hence, we conclude that
while better algorithms for the precise
flow-insensitive analysis are still of theoretical
interest, their practical impact for C programs is
likely to be negligible.
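Andersen's analysis, against which the precision gap above is measured, is an inclusion-based fixed-point computation over set constraints. A minimal sketch (an invented toy solver, not the paper's refinement algorithm; the four constraint kinds are the standard ones for statements p = &a, p = q, *p = q, and p = *q):

```python
from collections import defaultdict

def andersen(addr_of, copies, stores, loads):
    """Inclusion-based points-to analysis, iterated naively to a fixed point.

    addr_of: (p, a) for p = &a      copies: (p, q) for p = q
    stores:  (p, q) for *p = q      loads:  (p, q) for p = *q
    """
    pts = defaultdict(set)
    for p, a in addr_of:
        pts[p].add(a)
    changed = True

    def include(dst, src):
        # Enforce pts(src) subseteq pts(dst), flagging any growth.
        nonlocal changed
        new = pts[src] - pts[dst]
        if new:
            pts[dst] |= new
            changed = True

    while changed:
        changed = False
        for p, q in copies:
            include(p, q)
        for p, q in stores:          # *p = q: pts(q) flows into each target of p
            for t in list(pts[p]):
                include(t, q)
        for p, q in loads:           # p = *q: each target of q flows into p
            for t in list(pts[q]):
                include(p, t)
    return {v: s for v, s in pts.items() if s}

# p = &a; r = &b; q = p; *q = r
result = andersen(addr_of=[("p", "a"), ("r", "b")],
                  copies=[("q", "p")], stores=[("q", "r")], loads=[])
print(result["q"])  # {'a'}
```

Production implementations use difference propagation and cycle elimination rather than this quadratic re-scan, but the transfer rules are the same.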
Relational Abstractions for Continuous and Hybrid Systems
Sriram Sankaranarayanan,
and Ashish Tiwari
In Computer-Aided Verification (CAV),
Lecture Notes in Computer Science
Vol. 6806,
pp. 686-702,
2011.
In this paper, we define relational abstractions of hybrid
systems. A relational abstraction is obtained by
replacing the continuous dynamics in each mode by a
binary transition relation that relates a state of
the system to any state that can potentially be
reached at some future time instant using the
continuous dynamics. We construct relational
abstractions by reusing template-based invariant
generation techniques for continuous systems
described by Ordinary Differential Equations
(ODE). As a result, we abstract a given hybrid
system as a purely discrete, infinite-state
system. We apply k-induction to this abstraction to
prove safety properties, and use bounded
model-checking to find potential falsifications. We
present the basic underpinnings of our approach and
demonstrate its use on many benchmark systems to
derive simple and usable abstractions.
Generalizing the Template Polyhedral Domain
Michael Colon,
and Sriram Sankaranarayanan
In European Symposium on Programming (ESOP),
Lecture Notes in Computer Science
Vol. 6602,
pp. 176-195,
2011.
Template polyhedra generalize weakly relational domains
by specifying arbitrary fixed linear expressions on
the left-hand sides of inequalities and undetermined
constants on the right. The domain operations
required for analysis over template polyhedra can be
computed in polynomial time using linear
programming. In this paper, we introduce the
generalized template polyhedral domain that extends
template polyhedra using fixed left-hand side
expressions with bilinear forms involving program
variables and unknown parameters to the right. We
prove that the domain operations over generalized
templates can be defined as the ââbest possible
abstractionsââ of the corresponding polyhedral
domain operations. The resulting analysis can
straddle the entire space of linear relation
analysis starting from the template domain to the
full polyhedral domain. We show that analysis in
the generalized template domain can be performed by
dualizing the join, post-condition and widening
operations. We also investigate the special case of
template polyhedra wherein each bilinear form has at
most two parameters. For this domain, we use the
special properties of two dimensional polyhedra and
techniques from fractional linear programming to
derive domain operations that can be implemented in
polynomial time over the number of variables in the
program and the size of the polyhedra. We present
applications of generalized template polyhedra to
strengthen previously obtained invariants by
converting them into templates. We describe an
experimental evaluation of an implementation over
several benchmark systems.
Combining Time and Frequency Domain Specifications for Periodic
Signals
Aleksandar Chakarov,
Sriram Sankaranarayanan,
and Georgios E. Fainekos
In Runtime Verification (RV),
Lecture Notes in Computer Science (LNCS)
Vol. 7186,
pp. 294-309,
2011.
In this paper, we investigate formalisms for specifying
periodic signals using time and frequency domain
specifications along with algorithms for the signal
recognition and generation problems for such
specifications. The time domain specifications are
in the form of hybrid automata whose continuous
state variables generate the desired signals. The
frequency domain specifications take the form of an
"envelope" that constrains the possible power
spectra of the periodic signals with a given
frequency cutoff. The combination of time and
frequency domain specifications yields mixed-domain
specifications that constrain a signal to belong to
the intersection of both specifications. We
show that the signal recognition problem for
periodic signals specified by hybrid automata is
NP-complete, while the corresponding problem for
frequency domain specifications can be approximated
to any desired degree by linear programs, which can
be solved in polynomial time. The signal generation
problem for time and frequency domain specifications
can be encoded into linear arithmetic constraints
that can be solved using existing SMT solvers. We
present some preliminary results based on an
implementation that uses the SMT solver Z3 to tackle
the signal generation problems.
Model Counting Using the Inclusion-Exclusion Principle
Huxley Bennett,
and Sriram Sankaranarayanan
In Theory and Applications of Satisfiability Testing (SAT),
Lecture Notes in Computer Science
Vol. 6695,
pp. 362-363,
2011.
The inclusion-exclusion principle is a well-known mathematical principle used to count the number of elements in the union of a collection of sets in terms of intersections of sub-collections. We present an algorithm for counting the number of solutions of a given k-SAT formula using the inclusion-exclusion principle. The key contribution of our work consists of a novel subsumption pruning technique. Subsumption pruning exploits the alternating structure of the terms involved in the inclusion-exclusion principle to discover term cancellations that can account for the individual contributions of a large number of terms in a single step.
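As a rough illustration of the counting scheme (without the subsumption pruning that is the paper's contribution), the inclusion-exclusion sum can be evaluated directly over clause subsets: an assignment violates a clause exactly when all of its literals are false, so the model count is a signed sum over subsets of clauses. The function and DIMACS-style clause encoding below are illustrative sketches, not the paper's algorithm:

```python
from itertools import combinations

def count_models(n_vars, clauses):
    """Count satisfying assignments of a CNF formula by inclusion-exclusion.

    clauses: list of clauses; a clause is a tuple of non-zero ints, where
    +v / -v denote the literals x_v / not-x_v (DIMACS style). Since an
    assignment violates clause C iff every literal of C is false,
    #models = sum over clause subsets S of (-1)^|S| * N(S), where N(S)
    counts assignments falsifying all clauses in S. Exponential in the
    number of clauses; pruning is what makes the real algorithm viable.
    """
    total = 0
    for k in range(len(clauses) + 1):
        for subset in combinations(clauses, k):
            forced = {}          # variable -> truth value forced by falsification
            consistent = True
            for clause in subset:
                for lit in clause:
                    v, val = abs(lit), (lit < 0)  # the literal must be FALSE
                    if forced.get(v, val) != val:
                        consistent = False
                        break
                    forced[v] = val
                if not consistent:
                    break
            if consistent:
                total += (-1) ** k * 2 ** (n_vars - len(forced))
    return total

# (x1 or x2) and (not-x1 or x3) over 3 variables has 4 models
print(count_models(3, [(1, 2), (-1, 3)]))  # 4
```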
Automatic Abstraction of Non-Linear Systems Using Change of Variables Transformations
Sriram Sankaranarayanan
In Hybrid Systems: Computation and Control (HSCC),
pp. 143-152,
2011.
We present abstraction techniques that transform a given non-linear dynamical system into a linear system, such that invariant properties of the resulting linear abstraction can be used to infer invariants for the original system. The abstraction techniques rely on a change of bases transformation that associates each state variable of the abstract system with a function involving the state variables of the original system. We present conditions under which a given change of bases transformation for a non-linear system can define an abstraction.
Furthermore, given a non-linear system, we present a technique to discover whether a change of bases transformation involving degree-bounded polynomials exists that yields a linear system abstraction. If so, our technique also yields the resulting abstract linear system. This approach is further extended to search for a change of bases transformation that abstracts a given non-linear system into a system of linear differential inclusions. Our techniques enable the use of analysis techniques for linear systems to infer invariants for non-linear systems. We present preliminary evidence of the practical feasibility of our ideas using a prototype implementation.
2010
Monte-carlo techniques for falsification of temporal properties
of non-linear hybrid systems
Truong Nghiem,
Sriram Sankaranarayanan,
Georgios E. Fainekos,
Franjo Ivancic,
Aarti Gupta,
and George J. Pappas
In Hybrid Systems: Computation and Control,
pp. 211-220,
2010.
We present a Monte-Carlo optimization technique for
finding inputs to a system that falsify a given
Metric Temporal Logic (MTL) property. Our approach
performs a random walk over the space of inputs
guided by a robustness metric defined by the MTL
property. Robustness can be used to guide our search
for a falsifying trajectory by exploring
trajectories with smaller robustness values. We show
that the notion of robustness can be generalized to
consider hybrid system trajectories. The resulting
testing framework can be applied to non-linear
hybrid systems with external inputs. We show through
numerous experiments on complex systems that using
our framework can help automatically falsify
properties with more consistency as compared to
other means such as uniform sampling.
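A minimal sketch of the robustness-guided search idea: a Metropolis-style random walk over the input space that prefers inputs with lower robustness, declaring falsification when robustness drops below zero. The toy scalar system, the "always x <= 2" spec, and all function names here are invented for illustration; the actual framework works over MTL robustness of hybrid trajectories:

```python
import math
import random

def robustness(u):
    """Toy robustness of the spec 'always (x <= 2)' for the scalar system
    x' = -x + u, simulated by forward Euler: min over time of 2 - x(t).
    Negative robustness means the trace falsifies the property."""
    x, dt, rob = 0.0, 0.01, float("inf")
    for _ in range(500):
        x += dt * (-x + u)
        rob = min(rob, 2.0 - x)
    return rob

def falsify(lo, hi, iters=2000, temp=0.1, seed=0):
    """Metropolis-style random walk over the input interval [lo, hi],
    biased toward lower-robustness inputs; returns the best input found."""
    rng = random.Random(seed)
    u = rng.uniform(lo, hi)
    r = robustness(u)
    best_u, best_r = u, r
    for _ in range(iters):
        v = min(hi, max(lo, u + rng.gauss(0.0, 0.3)))  # local proposal
        s = robustness(v)
        # accept improvements; accept worse moves with prob e^{-(s-r)/T}
        if s < r or rng.random() < math.exp(-(s - r) / temp):
            u, r = v, s
        if r < best_r:
            best_u, best_r = u, r
    return best_u, best_r

u, r = falsify(0.0, 3.0)
print(u, r)  # r < 0: a falsifying input was found
```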
Integrating ICP and LRA Solvers for Deciding Nonlinear Real
Arithmetic Problems
Sicun Gao,
Malay K. Ganai,
Franjo Ivancic,
Aarti Gupta,
Sriram Sankaranarayanan,
and Edmund M. Clarke
In Formal Methods for Computer-Aided Design (FMCAD),
pp. 81-89,
2010.
We propose a novel integration of interval constraint
propagation (ICP) with SMT solvers for linear real
arithmetic (LRA) to decide nonlinear real arithmetic
problems. We use ICP to search for interval
solutions of the nonlinear constraints, and use the
LRA solver to either validate the solutions or
provide constraints to incrementally refine the
search space for ICP. This serves the goal of
separating the linear and nonlinear solving stages,
and we show that the proposed methods preserve the
correctness guarantees of ICP. Experimental results
show that such separation is useful for enhancing
efficiency.
Automatic Invariant Generation for Hybrid Systems using Ideal Fixed Points
Sriram Sankaranarayanan
In Hybrid Systems: Computation and Control,
pp. 211-230,
2010.
We present computational techniques for automatically
generating algebraic (polynomial equality)
invariants for algebraic hybrid systems. Such
systems involve ordinary differential equations with
multivariate polynomial right-hand sides. Our
approach casts the problem of generating invariants
for differential equations as the greatest fixed
point of a monotone operator over the lattice of
ideals in a polynomial ring. We provide an algorithm
to compute this monotone operator using basic ideas
from commutative algebraic geometry. However, the
resulting iteration sequence does not always
converge to a fixed point, since the lattice of
ideals over a polynomial ring does not satisfy the
descending chain condition. We then present a
bounded-degree relaxation based on the concept of
"pseudo ideals", due to Colon, that restricts
ideal membership using multipliers with bounded
degrees. We show that the monotone operator on
bounded degree pseudo ideals is convergent and
generates fixed points that can be used to generate
useful algebraic invariants for non-linear
systems. The technique for continuous systems is
then extended to consider hybrid systems with
multiple modes and discrete transitions between
modes. We have implemented the exact,
non-convergent iteration over ideals in combination
with the bounded degree iteration over pseudo ideals
to guarantee convergence. This has been applied to
automatically infer useful and interesting
polynomial invariants for some benchmark non-linear
systems.
Numerical stability analysis of floating-point computations using software model checking
Franjo Ivancic,
Malay Ganai,
Sriram Sankaranarayanan,
and Aarti Gupta
In Formal Methods and Models for Codesign (MEMOCODE),
pp. 49-58,
2010.
Software model checking has recently been successful in
discovering bugs in production software. Most tools
have targeted heap related programming mistakes and
control-heavy programs. However, real-time and
embedded controllers implemented in software are
susceptible to computational numeric
instabilities. We target verification of numerical
programs that use floating-point types, to detect
loss of numerical precision incurred in such
programs. Techniques based on abstract
interpretation have been used in the past for such
analysis. We use bounded model checking (BMC) based
on Satisfiability Modulo Theory (SMT) solvers, which
work on a mixed integer-real model that we generate
for programs with floating points. We have
implemented these techniques in our software
verification platform. We report experimental
results on benchmark examples to study the
effectiveness of model checking on such problems,
and the effect of various model simplifications on
the performance of model checking.
Satisfiability Modulo Path Programs
William R. Harris,
Sriram Sankaranarayanan,
Franjo Ivancic,
and Aarti Gupta
In ACM Principles of Programming Languages (POPL'10),
pp. 71-82,
2010.
Path-sensitivity is often a crucial requirement for
verifying safety properties of programs. As it is
infeasible to enumerate and analyze each path
individually, analyses compromise by soundly merging
information about executions along multiple
paths. However, this frequently results in a loss of
precision. We present a program analysis technique
that we call Satisfiability Modulo Path Programs
(SMPP), based on a path-based decomposition of a
program. It is inspired by insights that have driven
the development of modern SMT (Satisfiability Modulo
Theory) solvers. SMPP symbolically enumerates path
programs using a SAT formula over control edges in
the program. Each enumerated path program is
verified using an oracle, such as abstract
interpretation or symbolic execution, to either find
a proof of correctness or report a potential
violation. If a proof is found, then SMPP extracts a
sufficient set of control edges and corresponding
interference edges, as a form of proof-based
learning. Blocking clauses derived from these edges
are added back to the SAT formula to avoid
enumeration of other path programs guaranteed to be
correct, thereby improving performance and
scalability. We have applied SMPP in the F-Soft
program verification framework, to verify properties
of real-world C programs that require path-sensitive
reasoning. Our results indicate that the precision
from analyzing individual path programs, combined
with their efficient enumeration by SMPP, can prove
properties as well as indicate potential violations
in the large.
2009
Semantic Reduction of Thread Interleavings in Concurrent
Programs
Vineet Kahlon,
Sriram Sankaranarayanan,
and Aarti Gupta
In TACAS,
Lecture Notes in Computer Science
Vol. 5505,
pp. 124-138,
2009.
We propose a static analysis framework for concurrent
programs based on reduction of thread interleavings
using sound invariants on top of partial order
techniques. Starting from a product graph that
represents transactions, we iteratively refine the
graph to remove statically unreachable nodes in the
product graph using the results of these
analyses. We use abstract interpretation to
automatically derive program invariants, based on
abstract domains of increasing precision. We
demonstrate the benefits of this framework in an
application to find data race bugs in concurrent
programs, where our static analyses serve to reduce
the number of false warnings captured by an initial
lockset analysis. This framework also facilitates
use of model checking on the remaining warnings to
generate concrete error traces, where we leverage
the preceding static analyses to generate small
program slices and the derived invariants to improve
scalability. We describe our experimental results on
a suite of Linux device drivers.
Generating and Analyzing Symbolic Traces of Simulink/Stateflow Models
Aditya Kanade,
Rajeev Alur,
Franjo Ivancic,
S. Ramesh,
Sriram Sankaranarayanan,
and K. C. Shashidhar
In Computer-Aided Verification (CAV),
Lecture Notes in Computer Science
Vol. 5643,
pp. 430-445,
2009.
We present a simple yet useful technique for refining the
control structure of loops that occur in imperative
programs. Loops containing complex control flow are
common in synchronous embedded controllers derived
from modelling languages such as Lustre, Esterel,
and Simulink/Stateflow. Our approach uses a set of
labels to distinguish different control paths inside
a given loop. The iterations of the loop are
abstracted as a finite state automaton over these
labels. We then use static analysis techniques to
identify infeasible iteration sequences. Such
forbidden sequences are subtracted from the initial
language to obtain a refinement. In practice, the
refinement of control flow sequences often
simplifies the control flow patterns in the loop,
identifying nested loops, unrolling and unwinding
automatically. We have applied the refinement
technique to improve the precision of abstract
interpretation in the presence of widening. Our
experiments on a set of complex reactive loop
benchmarks clearly show the utility of our
refinement techniques. Other potentially useful
applications include termination analysis and
reverse engineering models from source code.
Robustness of Model-Based Simulations
Georgios E. Fainekos,
Sriram Sankaranarayanan,
Franjo Ivancic,
and Aarti Gupta
In Real-Time Systems Symposium (RTSS),
pp. 345-354,
2009.
This paper proposes a framework for determining the
correctness and robustness of simulations of hybrid
systems. The focus is on simulations generated from
model-based design environments and, in particular,
Simulink. The correctness and robustness of the
simulation is guaranteed against floating-point
rounding errors and system modeling
uncertainties. Toward that goal, self-validated
arithmetics, such as interval and affine arithmetic,
are employed for guaranteed simulation of
discrete-time hybrid systems. In the case of
continuous-time hybrid systems, self-validated
arithmetics are utilized for over-approximations of
reachability computations.
Symbolic Deadlock Analysis in Concurrent Libraries and their Clients
Jyotirmoy Deshmukh,
E. Allen Emerson,
and Sriram Sankaranarayanan
In Automated Software Engg. (ASE),
pp. 480-491,
2009.
Methods in object-oriented concurrent libraries hide internal synchronization details. However, information hiding may result in clients causing thread safety violations by invoking methods in an unsafe manner. Given such a library, we present a technique for inferring interface contracts that specify permissible concurrent method calls and patterns of aliasing among method arguments, such that the derived contracts guarantee deadlock free execution for the methods in the library. The contracts also help client developers by documenting required assumptions about the library methods. Alternatively, the contracts can be statically enforced in the client code to detect potential deadlocks in the client. Our technique combines static analysis with a symbolic encoding for tracking lock dependencies, allowing us to synthesize contracts using a SMT solver. Our prototype tool analyzes over a million lines of code for some widely-used Java libraries within an hour, thus demonstrating its scalability and efficiency. Furthermore, the contracts inferred by our approach have been able to pinpoint real deadlocks in clients, i.e. deadlocks that have been a part of bug-reports filed by users and developers of the client code.
Inputs of Coma: Static Detection of Denial-of-Service Vulnerabilities
Richard Chang,
Guofei Jiang,
Franjo Ivancic,
Sriram Sankaranarayanan,
and Vitaly Shmatikov
In Computer Security Foundations (CSF),
pp. 186-199,
2009.
As networked systems grow in complexity, they are
increasingly vulnerable to denial-of-service (DoS)
attacks involving resource exhaustion. A single
malicious "input of coma" can trigger
high-complexity behavior such as deep recursion in a
carelessly implemented server, exhausting CPU time
or stack space and making the server unavailable to
legitimate clients. These DoS attacks exploit the
semantics of the target application, are rarely
associated with network traffic anomalies, and are
thus extremely difficult to detect using
conventional methods. We present SAFER, a static
analysis tool for identifying potential DoS
vulnerabilities and the root causes of
resource-exhaustion attacks before the software is
deployed. Our tool combines taint analysis with
control dependency analysis to detect
high-complexity control structures whose execution
can be triggered by untrusted network inputs. When
evaluated on real-world networked applications,
SAFER discovered previously unknown DoS
vulnerabilities in the expat XML parser and the
SQLite library, as well as a new attack on a
previously patched version of the wu-ftpd
server. This demonstrates the importance of
understanding and repairing the root causes of DoS
vulnerabilities rather than simply blocking known
malicious inputs.
2008
SLR: Path-Sensitive Analysis through Infeasible-Path Detection
and Syntactic Language Refinement
Gogul Balakrishnan,
Sriram Sankaranarayanan,
Franjo Ivancic,
Ou Wei,
and Aarti Gupta
In SAS,
Lecture Notes in Computer Science
Vol. 5079,
pp. 238-254,
2008.
We present a technique for detecting semantically
infeasible paths in programs using abstract
interpretation. Our technique uses a sequence of
path-insensitive forward and backward runs of an
abstract interpreter to infer paths in the CFG that
cannot be exercised in concrete executions of the
program. We then present a syntactic language
refinement (SLR) technique that automatically
excludes semantically infeasible paths from the
program during the static analysis. This allows us
to iteratively prove more properties. Specifically,
our technique simulates the effect of a
path-sensitive analysis by performing syntactic
language refinement over an underlying
path-insensitive static analyzer. Finally, we
present experimental results to quantify the impact
of our technique on an abstract interpreter for C
programs.
Symbolic Model Checking of Hybrid Systems Using Template
Polyhedra
Sriram Sankaranarayanan,
Thao Dang,
and Franjo Ivancic
In TACAS,
Lecture Notes in Computer Science
Vol. 4963,
pp. 188-202,
2008.
We propose techniques for the verification of hybrid
systems using template polyhedra, i.e., polyhedra
whose inequalities have fixed expressions but with
varying constant terms. Given a hybrid system
description and a set of template linear expressions
as inputs, our technique constructs
over-approximations of the reachable states using
template polyhedra. The advantages of using template
polyhedra are that operations used in symbolic model
checking such as intersection, union and
post-condition across discrete transitions over
template polyhedra can be computed efficiently
without using expensive vertex enumeration.
Additionally, the verification of hybrid systems
requires techniques to handle the continuous dynamics
inside locations. A key contribution of the paper is
a new flowpipe construction algorithm using template
polyhedra. Our technique uses higher-order Taylor
series approximations along with repeated
optimization problems to bound the terms in the
Taylor series expansion. The location invariant is
used to enclose the remainder term of the Taylor
series, and thus make our technique sound. As a
result, we can compute flowpipe segments using
template polyhedra, leading to a precise treatment
of the continuous dynamics. Finally, we have
implemented our approach as a part of the tool
TimePass for verifying reachability properties of
affine hybrid automata.
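The efficiency claim can be illustrated on the representation itself: once the template directions are fixed, a template polyhedron is just a vector of constant bounds, and join/meet become componentwise operations with no vertex enumeration. The sketch below (function names hypothetical, and far simpler than the paper's flowpipe construction) shows these operations over a fixed direction set:

```python
def dot(d, x):
    """Inner product of a template direction with a point."""
    return sum(di * xi for di, xi in zip(d, x))

def bounds_of_points(directions, points):
    """Tightest template polyhedron {x : d.x <= b_d for each d}
    containing the given points: one max per direction."""
    return [max(dot(d, p) for p in points) for d in directions]

def template_union(b1, b2):
    """Join: smallest template polyhedron containing both operands,
    computed as the per-direction max of the bounds."""
    return [max(a, b) for a, b in zip(b1, b2)]

def template_intersection(b1, b2):
    """Meet: per-direction min of the bounds."""
    return [min(a, b) for a, b in zip(b1, b2)]

# Box template: directions +x, -x, +y, -y
dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(bounds_of_points(dirs, [(0, 0), (1, 2)]))  # [1, 0, 2, 0]
```

The design point: both union and intersection run in time linear in the number of template rows, which is why symbolic model checking over template polyhedra avoids the exponential cost of general polyhedral operations.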
A Policy Iteration Technique for Time Elapse over Template
Polyhedra
Sriram Sankaranarayanan,
Thao Dang,
and Franjo Ivancic
In HSCC,
Lecture Notes in Computer Science
Vol. 4981,
pp. 654-657,
2008.
In this paper, we present a policy iteration technique
that computes an over-approximation of the time
trajectories of a system using template
polyhedra. Such polyhedra are obtained by conjoining
a set of inequality templates with varying constant
coefficients. Given a set of template expressions,
we show the existence of a smallest template
polyhedron that is a positive invariant w.r.t. the
dynamics of the continuous variables, and hence, an
over-approximation of the time trajectories. Thus, we
derive a time elapse operator for template polyhedra
using policy iteration that computes tight
over-approximations of the time trajectories. We
exploit the result of the policy iteration to
improve the precision of Taylor series-based
flowpipe construction. Finally, we incorporate our
ideas inside a prototype tool for safety
verification of affine hybrid systems, with
promising results on benchmarks.
Constructing Invariants for Hybrid Systems
Sriram Sankaranarayanan,
Henny Sipma,
and Zohar Manna
Formal Methods in System Design,
32
(1)
pp. 25-55,
2008.
An invariant of a system is a predicate that holds for every
reachable state. In this paper, we present
techniques to generate invariants for hybrid
systems. This is achieved by reducing the invariant
generation problem to a constraint solving problem
using methods from the theory of ideals over
polynomial rings. We extend our previous work on the
generation of algebraic invariants for discrete
transition systems in order to generate algebraic
invariants for hybrid systems. In doing so, we
present a new technique to handle consecution across
continuous differential equations. The techniques we
present allow a trade-off between the complexity of
the invariant generation process and the strength of
the resulting invariants.
Dynamic inference of likely data preconditions over predicates
by tree learning
Sriram Sankaranarayanan,
Swarat Chaudhuri,
Franjo Ivancic,
and Aarti Gupta
In Intl. Symposium on Software Testing and Analysis (ISSTA),
pp. 295-306,
2008.
We present a technique to learn data preconditions for
procedures written in an imperative programming
language. Given a procedure and a set of predicates
over its inputs, our technique enumerates different
truth assignments to the predicates, deriving test
cases from each feasible truth assignment. The
predicates themselves are provided by the user
and/or automatically produced from the program
description using heuristics. The enumeration of
truth assignments is performed by using randomized
SAT solvers with a theory satisfiability checker
capable of generating unsatisfiable cores. For each
combination of truth values chosen by our sampler,
the corresponding set of test cases are generated
and executed. Based on the result of the execution,
the truth combination is classified as safe or
buggy. Finally, a decision tree classifier is used
to generate a boolean formula over the input
predicates that explains the truth table generated
in this process. The resulting boolean formula forms
a precondition for the function under test. We
apply our techniques on a wide variety of functions,
including functions in the standard C library. Our
experiments show that the proposed learning
technique is quite robust. In many cases, it
successfully learns a precondition that captures a
safe and permissive calling environment required for
the execution of the function.
Mining library specifications using inductive logic programming
Sriram Sankaranarayanan,
Franjo Ivancic,
and Aarti Gupta
In Intl. Symp. on Software Engg. (ICSE),
pp. 131-140,
2008.
Software libraries are ubiquitous for organizing widely
useful functionalities in order to promote
modularity and code reuse. A typical software
library is used by client programs through an API
that hides library internals from the
client. Typically, the rules governing the correct
usage of the API are documented informally. As a
result, the behaviour of the library under some
corner cases may not be well understood by the
programmer. We propose a methodology for learning
interface specifications using inductive logic
programming (ILP). Our technique runs several unit
tests on the library in order to generate relations
describing the operation of the library. The data
collected from these tests are used by an inductive
learner to obtain rich Prolog specifications. Such
specifications capture essential properties of the
library. They may be used for applications such as
reverse engineering, and also to construct checks
that enforce proper API usage.
2007
Program Analysis Using Symbolic Ranges
Sriram Sankaranarayanan,
Franjo Ivancic,
and Aarti Gupta
In Static Analysis Symposium,
Lecture Notes in Computer Science
Vol. 4634,
pp. 366-383,
2007.
Interval analysis seeks static lower and upper bounds on
the values of program variables. These bounds are
useful, especially for inferring invariants to prove
buffer overflow checks. In practice, however,
intervals by themselves are inadequate as invariants
due to the lack of relational information among the
variables. In this paper, we present a technique
for deriving symbolic bounds on the variable
values. We study a restricted class of polyhedra
whose constraints are stratified with respect to
some variable ordering provided by the user, or
chosen heuristically. We define a notion of
normalization for such constraints and demonstrate
polynomial time domain operations on the resulting
domain of symbolic range constraints. The abstract
domain is intended to complement widely used domains
such as intervals and octagons for use in buffer
overflow analysis. Finally, we study the impact of
our analysis on commercial software using an
overflow analyzer for the C language.
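For contrast with the symbolic range domain, the plain interval domain that it complements can be sketched in a few lines, including the widening needed to force convergence on loops. This is a simplified illustration (the class and the widening rule are the textbook versions, not the paper's domain):

```python
import math

class Interval:
    """Interval abstract domain element [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Abstract addition: add the bounds componentwise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def join(self, other):
        # Least upper bound in the interval lattice.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def widen(self, other):
        # Jump any unstable bound to +/- infinity to force convergence.
        lo = self.lo if other.lo >= self.lo else -math.inf
        hi = self.hi if other.hi <= self.hi else math.inf
        return Interval(lo, hi)

# Abstractly execute:  i = 0; while (...): i = i + 1
i = Interval(0, 0)
for _ in range(3):                    # Kleene iteration with widening
    i = i.widen(i.join(i + Interval(1, 1)))
print(i.lo, i.hi)  # 0 inf
```

The lower bound 0 is retained while the unstable upper bound is widened away, which is exactly the relational information loss (no link between `i` and other variables) that symbolic ranges aim to mitigate.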
Fast and Accurate Static Data-Race Detection for Concurrent
Programs
Vineet Kahlon,
Yu Yang,
Sriram Sankaranarayanan,
and Aarti Gupta
In Computer-Aided Verification (CAV),
Lecture Notes in Computer Science
Vol. 4590,
pp. 226-239,
2007.
We present new techniques for fast, accurate and scalable
static race detection in concurrent
programs. Focusing our analysis on Linux device
drivers allowed us to identify the unique challenges
posed by debugging large-scale real-life code and
also pinpointed drawbacks in existing race warning
generation methods. This motivated the development
of new techniques that helped us in improving both
the scalability as well as the accuracy of each of
the three main steps in a race warning generation
system. The first and most crucial step in data race
detection is automatic shared variable
discovery. Towards that end, we present a new,
efficient dataflow algorithm for shared variable
detection which is more effective than existing
correlation-based techniques that failed to detect
shared variables responsible for data races in a
majority of the drivers in our benchmark
suite. Secondly, accuracy of race warning generation
strongly hinges on the accuracy of the alias
analysis used to compute aliases for lock
pointers. We formulate a new scalable context
sensitive alias analysis that effectively combines a
divide and conquer strategy with function
summarization and is demonstrably more efficient
than existing BDD-based techniques. Finally, we
provide a new warning reduction technique that
leverages lock acquisition patterns to yield
provably better warning reduction than existing
lockset based methods.
Controller Synthesis of Discrete Linear Plants Using Polyhedra
Matteo Slanina,
Sriram Sankaranarayanan,
Henny B. Sipma,
and Zohar Manna
REACT Technical Report (Stanford University),
1
(1)
pp. 1-14,
2007.
We study techniques for synthesizing synchronous controllers
for affine plants with disturbances, based on safety
specifications. Our plants are modeled in terms of
discrete linear systems whose variables are
partitioned into system, control, and disturbance
variables. We synthesize non-blocking controllers
that satisfy a user-provided safety specification by
means of a fixed point iteration over the control
precondition state transformer. Using convex
polyhedra to represent sets of states, we present
both precise and approximate algorithms for
computing control preconditions and discuss
strategies for forcing convergence of the
iteration. We present a technique for automatically
deriving controllers from the result of the
analysis, and demonstrate our approach on
examples.
State space exploration using feedback constraint generation
and Monte-Carlo sampling
Sriram Sankaranarayanan,
Richard M. Chang,
Guofei Jiang,
and Franjo Ivancic
The systematic exploration of the space of all the
behaviours of a software system forms the basis of
numerous approaches to verification. However,
existing approaches face many challenges with
scalability and precision. We propose a framework
for validating programs based on statistical
sampling of inputs guided by statically generated
constraints that steer the simulations towards more
"desirable" traces. Our approach works
iteratively: each iteration first simulates the
system on some inputs sampled from a restricted
space, while recording facts about the simulated
traces. Subsequent iterations of the process attempt
to steer the future simulations away from what has
already been seen in the past iterations. This is
achieved by two separate means: (a) we perform
symbolic executions in order to guide the choice of
inputs, and (b) we sample from the input space using
a probability distribution specified by means of
previously observed test data using a Markov Chain
Monte-Carlo (MCMC) technique. As a result, the
sampled inputs generate traces that are likely to be
significantly different from the observations in the
previous iterations in some user specified ways. We
demonstrate that our approach is effective. It can
rapidly isolate rare behaviours of systems that
reveal more bugs and improve coverage.
2006
Efficient Strongly Relational Polyhedral Analysis
Sriram Sankaranarayanan,
Michael Colon,
Henny B. Sipma,
and Zohar Manna
In VMCAI,
Lecture Notes in Computer Science
Vol. 3855,
pp. 111-125,
2006.
Polyhedral analysis infers invariant linear equalities and
inequalities of imperative programs. However, the
exponential complexity of polyhedral operations such
as image computation and convex hull limits the
applicability of polyhedral analysis. Weakly
relational domains such as intervals and octagons
address the scalability issue by considering
polyhedra whose constraints are drawn from a
restricted, user-specified class. On the other hand,
these domains rely solely on candidate expressions
provided by the user. Therefore, they often fail to
produce strong invariants. We propose a polynomial
time approach to strongly relational analysis. We
provide efficient implementations of join and post
condition operations, achieving a trade-off between
performance and accuracy. We have implemented a
strongly relational polyhedral analyzer for a subset
of the C language. Initial experimental results on
benchmark examples are encouraging.
Static Analysis in Disjunctive Numerical Domains
Sriram Sankaranarayanan,
Franjo Ivancic,
Ilya Shlyakhter,
and Aarti Gupta
In Static Analysis Symposium,
Lecture Notes in Computer Science
Vol. 4134,
pp. 3-17,
2006.
The convexity of numerical domains such as polyhedra,
octagons, intervals and linear equalities enables
tractable analysis of software for buffer
overflows, null pointer dereferences and floating
point errors. However, convexity also causes the
analysis to fail in many common cases. Powerset
extensions can remedy this shortcoming by
considering disjunctions of
predicates. Unfortunately, analysis using powerset
domains can be exponentially more expensive as
compared to analysis on the base domain. In this
paper, we prove structural properties of fixed
points computed in commonly used powerset
extensions. We show that a fixed point computed on a
powerset extension is also a fixed point in the base
domain computed on an "elaboration" of the
program's CFG structure. Using this insight, we
build analysis algorithms that approach path
sensitive static analysis algorithms by performing
the fixed point computation on the base domain while
discovering an "elaboration" on the fly. Using
restrictions on the nature of the elaborations, we
design algorithms that scale polynomially in terms
of the number of disjuncts. We have implemented a
lightweight static analyzer for C programs with
encouraging initial results.
Fixed Point Iteration for Computing the Time Elapse Operator
Sriram Sankaranarayanan,
Henny Sipma,
and Zohar Manna
In HSCC,
Lecture Notes in Computer Science
Vol. 3927,
pp. 537-551,
2006.
We investigate techniques for automatically generating symbolic
approximations to the time solution of a system of
differential equations. This is an important
primitive operation for the safety analysis of
continuous and hybrid systems. In this paper we
design a time elapse operator that computes a
symbolic over-approximation of time solutions to a
continuous system starting from a given initial
region. Our approach is iterative over the cone of
functions (drawn from a suitable universe) that are
non-negative over the initial region. At each stage,
we iteratively remove functions from the cone whose
Lie derivatives do not lie inside the current
iterate. If the iteration converges, the set of
states defined by the final iterate is shown to
contain all the time successors of the initial
region. The convergence of the iteration can be
forced using abstract interpretation operations such
as widening and narrowing. We instantiate our
technique to linear hybrid systems with
piecewise-affine dynamics to compute polyhedral
approximations to the time successors. Using our
prototype implementation TimePass, we demonstrate
the performance of our technique on benchmark
examples.
2005
LOLA: Runtime Monitoring of Synchronous Systems
Ben D. Angelo,
Sriram Sankaranarayanan,
Cesar Sanchez,
and Many Others
We present a specification language and algorithms for the
online and offline monitoring of synchronous systems
including circuits and embedded systems. Such
monitoring is useful not only for testing, but also
under actual deployment. The specification language
is simple and expressive; it can describe both
correctness/failure assertions and interesting
statistical measures that are useful for
system profiling and coverage analysis. The
algorithm for online monitoring of queries in this
language follows a partial evaluation strategy: it
incrementally constructs output streams from input
streams, while maintaining a store of partially
evaluated expressions for forward references. We
identify a class of specifications, characterized
syntactically, for which the algorithm's memory
requirement is independent of the length of the
input streams. Being able to bound memory
requirements is especially important in online
monitoring of large input streams. We extend the
concepts used in the online algorithm to construct
an efficient offline monitoring algorithm for large
traces. We have implemented our algorithm and
applied it to two industrial systems, the PCI bus
protocol and a memory controller. The results
demonstrate that our algorithms are practical and
that our specification language is sufficiently
expressive to handle specifications of interest to
industry.
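A minimal sketch in the spirit of the efficiently monitorable fragment (this is illustrative Python, not the LOLA language): an online monitor that computes an output stream from an input stream incrementally, keeping only constant state regardless of trace length.

```python
# Hedged sketch: an online monitor that incrementally derives an output
# stream (a running count) from an input stream, using O(1) memory --
# illustrating the bounded-memory property discussed above.

def running_count(events, predicate):
    """Yield, at each step, how many events so far satisfy the predicate."""
    count = 0
    for e in events:
        if predicate(e):
            count += 1
        yield count

# Count retransmissions in a packet trace (event names are made up).
trace = ["send", "retransmit", "ack", "retransmit", "send"]
counts = list(running_count(trace, lambda e: e == "retransmit"))
print(counts)  # [0, 1, 1, 2, 2]
```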
Collecting Statistics over Runtime Executions
Bernd Finkbeiner,
Sriram Sankaranarayanan,
and Henny Sipma
Formal Methods In System Design,
27
(3)
pp. 253-274,
2005.
We present an extension to linear-time temporal logic (LTL)
that combines the temporal specification with the
collection of statistical data. By collecting
statistics over runtime executions of a program we
can answer complex queries, such as "what is the
average number of packet transmissions" in a
communication protocol, or "how often does a
particular process enter the critical section while
another process remains waiting" in a mutual
exclusion algorithm. To decouple the evaluation
strategy of the queries from the definition of the
temporal operators, we introduce algebraic
alternating automata as an automata-based
intermediate representation. Algebraic alternating
automata are an extension of alternating automata
that produce a value instead of acceptance or
rejection for each trace. Based on the translation
of the formulas from the query language to algebraic
alternating automata, we obtain a simple and
efficient query evaluation algorithm. The approach
is illustrated with examples and experimental
results.
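As a hedged illustration of the kind of statistical query the extended logic supports (the state and event names below are made up): over a finite trace of process-state pairs, count how often P1 enters its critical section while P2 is waiting.

```python
# Hedged illustration: evaluate a statistical query over a finite trace.
# Each trace element is (p1_state, p2_state); we count entries of P1 into
# its critical section that happen while P2 is waiting.

def count_unfair_entries(trace):
    entries = 0
    prev_p1 = None
    for p1, p2 in trace:
        # "enters" means a transition into the critical section
        if p1 == "critical" and prev_p1 != "critical" and p2 == "waiting":
            entries += 1
        prev_p1 = p1
    return entries

trace = [("idle", "waiting"), ("critical", "waiting"),
         ("idle", "waiting"), ("critical", "idle"),
         ("critical", "waiting")]
print(count_unfair_entries(trace))  # 1
```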
Scalable Analysis of Linear Systems using Mathematical Programming
Sriram Sankaranarayanan,
Henny B. Sipma,
and Zohar Manna
In Verification, Model-Checking and Abstract-Interpretation (VMCAI 2005),
Lecture Notes in Computer Science
Vol. 3385,
pp. 25-41,
2005.
We present a method for generating linear invariants
for large systems. The method performs forward
propagation in an abstract domain consisting of
arbitrary polyhedra of a predefined fixed shape. The
basic operations on the domain like abstraction,
intersection, join and inclusion tests are all posed
as linear optimization queries, which can be solved
efficiently by existing LP solvers. The number and
dimensionality of the LP queries are polynomial in
the program dimensionality, size and the number of
target invariants. The method generalizes similar
analyses in the interval, octagon, and octahedra
domains, without resorting to polyhedral
manipulations. We demonstrate the performance of our
method on some benchmark programs.
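A hedged sketch of the core idea: in a template domain, the abstract element stores, for each fixed template row c, an upper bound on c . x over the concrete states, so abstraction becomes one optimization query per template row. For self-containment the sketch below maximizes over the vertices of a box rather than calling an LP solver (a linear function over a box attains its maximum at a vertex); the paper poses the same queries as LPs over arbitrary polyhedra.

```python
# Hedged sketch: template-polyhedra abstraction as per-row optimization.
# For each template row c, compute max of c . x over a box by enumerating
# the box's vertices (stand-in for the LP queries used in the paper).

from itertools import product

def template_bounds(box, templates):
    """box: list of (lo, hi) per variable; templates: list of row vectors c.
    Returns, for each row c, the max of c . x over the box."""
    vertices = list(product(*box))
    return [max(sum(ci * vi for ci, vi in zip(c, v)) for v in vertices)
            for c in templates]

box = [(-1, 2), (0, 3)]                         # -1 <= x <= 2, 0 <= y <= 3
octagon = [(1, 0), (-1, 0), (0, 1), (0, -1),    # interval rows
           (1, 1), (1, -1), (-1, 1), (-1, -1)]  # octagon rows
print(template_bounds(box, octagon))  # [2, 1, 3, 0, 5, 2, 4, 1]
```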
2004
Non-linear Loop Invariant Generation using Groebner Bases
Sriram Sankaranarayanan,
Henny Sipma,
and Zohar Manna
In ACM Principles of Programming Languages (POPL),
pp. 318-330,
2004.
We present a new technique for the generation of
non-linear (algebraic) invariants of a program. Our
technique uses the theory of ideals over the real
and complex numbers to reduce the non-linear
invariant generation problem to a numerical
constraint solving problem. So far, the literature
on invariant generation has concentrated almost
exclusively on the construction of linear invariants
of linear programs. There has been very little
progress toward non-linear invariant generation. In
this paper, we demonstrate a technique that encodes
the conditions for a given template assertion being
an invariant into a set of constraints, such that
all the solutions to these constraints correspond to
non-linear (algebraic) loop invariants of the
program. We discuss some trade-offs between the
completeness of the technique and the tractability
of the constraint-solving problem generated. The
application of the technique is demonstrated on a
few examples.
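A concrete, hedged instance of the consecution condition for an algebraic invariant (the loop is a standard textbook example, not taken from the paper): the squaring loop (a, s) := (a + 1, s + 2a + 1) maintains p(a, s) = s - a^2 = 0, since p is unchanged by the transition. The paper derives such invariants symbolically via ideal theory; the sketch below only checks consecution on sample states.

```python
# Hedged example of consecution for an algebraic loop invariant.
# Loop body: (a, s) := (a + 1, s + 2*a + 1) preserves p(a, s) = s - a^2 = 0,
# because p(step(a, s)) = s + 2a + 1 - (a + 1)^2 = s - a^2 = p(a, s).

def p(a, s):
    return s - a * a

def step(a, s):
    return a + 1, s + 2 * a + 1

for a in range(-5, 6):
    s = a * a                     # a state on the invariant variety
    assert p(*step(a, s)) == 0    # consecution: the invariant is preserved
print("consecution holds on all samples")
```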
Constraint-based Linear-Relations Analysis
Sriram Sankaranarayanan,
Henny B. Sipma,
and Zohar Manna
In Static Analysis Symposium (SAS 2004),
Lecture Notes in Computer Science
Vol. 3148,
pp. 53-69,
2004.
Linear-relations analysis of transition systems
discovers linear invariant relationships among the
variables of the system. These relationships help
establish important safety and liveness
properties. Efficient techniques for the analysis of
systems using polyhedra have been explored, leading
to the development of successful tools like
HyTech. However, existing techniques rely on the use
of approximations such as widening and extrapolation
in order to ensure termination. In an earlier paper,
we demonstrated the use of Farkas' Lemma to provide a
translation from the linear-relations analysis
problem into a system of constraints on the unknown
coefficients of a candidate invariant. However,
since the constraints in question are non-linear, a
naive application of the method does not scale. In
this paper, we show that with some efficient
simplifications and approximations to the quantifier
elimination procedure, the method not only scales
to higher dimensions but also enjoys performance
advantages on some larger examples.
Constructing Invariants for Hybrid Systems
Sriram Sankaranarayanan,
Henny B. Sipma,
and Zohar Manna
In Hybrid Systems: Computation and Control (HSCC 2004),
Lecture Notes in Computer Science
Vol. 2993,
pp. 539-555,
2004.
An invariant of a system is a predicate that holds for
every reachable state. In this paper, we present
techniques to generate invariants for hybrid systems
by reducing the invariant generation problem to a
constraint solving problem, using methods from the
theory of ideals over polynomial rings. We extend
our previous work on the generation of algebraic
invariants for discrete transition systems in order
to generate algebraic invariants for hybrid
systems. In doing so, we present a new technique to
handle consecution across continuous differential
equations. The techniques we present allow a
trade-off between the complexity of the invariant
generation process and the strength of the resulting
invariants.
2003
Event Correlation: Language and Semantics
Cesar Sanchez,
Sriram Sankaranarayanan,
Henny Sipma,
Ting Zhang,
and David Dill
In EMSOFT,
Lecture Notes in Computer Science
Vol. 2855,
pp. 323-339,
2003.
Event correlation is a service provided by middleware
platforms that allows components in a
publish/subscribe architecture to subscribe to
patterns of events rather than individual
events. Event correlation improves the scalability
and performance of distributed systems and increases
their analyzability, while reducing their complexity
by moving functionality to the middleware. To ensure
that event correlation is provided as a standard and
reliable service, it must possess well-defined and
unambiguous semantics. In this paper we present a
language and formal model for event correlation with
operational semantics defined in terms of
transducers. The language has been motivated by an
avionics application and includes constructs for
modes in addition to the more common constructs such
as selection, accumulation and sequential
composition. Prototype event processing engines for
this language have been implemented in both C
Petri Net Analysis using Invariant Generation
Sriram Sankaranarayanan,
Henny B. Sipma,
and Zohar Manna
In Verification: Theory and Practice,
Lecture Notes in Computer Science
Vol. 2772,
pp. 682-701,
2003.
Petri nets have been widely used to model and analyze
concurrent systems. Their wide-spread use in this
domain is, on one hand, facilitated by their
simplicity and expressiveness. On the other hand,
the analysis of Petri nets for questions like
reachability, boundedness and deadlock freedom can
be surprisingly hard. In this paper, we model Petri
nets as transition systems. We exploit the special
structure in these transition systems to provide an
exact and closed-form characterization of all their
inductive linear invariants. We then exploit this
characterization to provide an invariant generation
technique that we demonstrate to be efficient and
powerful in practice. We compare our work with
related approaches in the literature and discuss extensions.
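A hedged sketch of the classical condition behind linear Petri-net invariants: for a net with incidence matrix C (one row per place, one column per transition), a place invariant is a vector y with y^T C = 0, so y . m stays constant under any firing. The small net below (tokens shuttling between two places) is an illustrative example, not one from the paper.

```python
# Hedged illustration: place invariants of a Petri net.
# y is a place invariant iff y^T C = 0; then y . m is preserved by firings.

C = [[-1,  1],   # place p0: t0 consumes a token, t1 produces one
     [ 1, -1]]   # place p1: t0 produces a token, t1 consumes one

def is_place_invariant(y, C):
    cols = len(C[0])
    return all(sum(y[i] * C[i][j] for i in range(len(C))) == 0
               for j in range(cols))

def fire(m, C, t):
    """New marking after firing transition t from marking m."""
    return [mi + C[i][t] for i, mi in enumerate(m)]

y = [1, 1]                       # total token count is invariant
assert is_place_invariant(y, C)
m1 = fire([3, 0], C, 0)          # fire t0
print(m1, sum(m1))               # [2, 1] 3 -- token count preserved
```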
Linear Invariant Generation using Non-linear Constraint Solving
Michael Colon,
Sriram Sankaranarayanan,
and Henny Sipma
In Computer-Aided Verification (CAV),
Lecture Notes in Computer Science
Vol. 2725,
pp. 420-433,
2003.
We present a new method for the generation of linear
invariants which reduces the problem to a non-linear
constraint solving problem. Our method, based on
Farkas' Lemma, synthesizes linear invariants by
extracting non-linear constraints on the
coefficients of a target invariant from a
program. These constraints guarantee that the linear
invariant is inductive. We then apply existing
techniques, including specialized quantifier
elimination methods over the reals, to solve these
non-linear constraints. Our method has the advantage
of being complete for inductive invariants. To our
knowledge, this is the first sound and complete
technique for generating inductive invariants of
this form. We illustrate the practicality of our
method on several examples, including cases in which
traditional methods based on abstract interpretation
with widening fail to generate sufficiently strong
invariants.
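A hedged illustration of the certificate underlying the encoding: by Farkas' Lemma, (A x <= b) implies (c . x <= d) iff there is a multiplier vector lam >= 0 with lam^T A = c and lam^T b <= d. In the paper the invariant coefficients c, d are themselves unknowns, which makes the resulting constraints non-linear; the sketch below only verifies a given certificate, which is a linear check.

```python
# Hedged sketch: verify a Farkas' Lemma certificate for the implication
# (A x <= b) => (c . x <= d), i.e. lam >= 0, lam^T A = c, lam^T b <= d.

def check_farkas(A, b, lam, c, d):
    rows, cols = len(A), len(A[0])
    if any(l < 0 for l in lam):
        return False
    combo = [sum(lam[i] * A[i][j] for i in range(rows)) for j in range(cols)]
    return combo == c and sum(lam[i] * b[i] for i in range(rows)) <= d

# x <= 2 and y <= 3 imply x + y <= 5, certified by lam = (1, 1).
A = [[1, 0],
     [0, 1]]
b = [2, 3]
print(check_farkas(A, b, [1, 1], [1, 1], 5))  # True
```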
2002
Collecting Statistics over Runtime Executions
Bernd Finkbeiner,
Sriram Sankaranarayanan,
and Henny Sipma
In Runtime Verification (RV 2002),
Elec. Notes Theor. Comp. Sci
Vol. 70,
pp. 36-54,
2002.
By collecting statistics over runtime executions of a
program we can answer complex queries, such as
"what is the average number of packet
retransmissions" in a communication protocol, or
"how often does process P1 enter the critical
section while process P2 waits" in a mutual
exclusion algorithm. We present an extension to
linear-time temporal logic that combines the
temporal specification with the collection of
statistical data. By translating formulas of this
language to alternating automata we obtain a simple
and efficient query evaluation algorithm. We
illustrate our approach with examples and
experimental results.
2001
Min-max Computation Tree Logic
Pallab Dasgupta,
Partha P. Chakrabarti,
Jatinder Deka,
and Sriram Sankaranarayanan
Artificial Intelligence,
127
(1)
pp. 137-162,
2001.
This paper introduces a branching time temporal query
language called Min-max CTL which is similar in
syntax to the popular temporal logic CTL. However,
unlike CTL, Min-max CTL can express timing queries
on a timed model. We show that interesting timing
queries involving a combination of min and max can
be expressed in Min-max CTL. While model checking
using most timed temporal logics is PSPACE-complete
or harder, we show that many practical timing
queries, where we are interested in the worst case
or best case timings, can be answered in polynomial
time by querying the system using Min-max CTL.