Workshop on Stochastic Planning & Control of Dynamical Systems
Workshop Updates:
Schedule finalized!
The schedule and speaker lineup have been finalized! Looking forward to seeing you in Rio!
Grad Student Lightning Rounds Sign Up Is Now Live!
Are you a grad student? Sign up for the Grad Student Lightning Rounds! Take a look at Motivation and Objectives below to learn more about the workshop's focus.
Applications close September 5th AoE. Lightning round participants will be announced September 10th AoE.
Website Launch
Welcome to the official website for the Workshop on Stochastic Planning & Control of Dynamical Systems. Please check back for updates on speakers, schedule, and registration.
Motivation and Objectives
Recent advances in stochastic control theory have opened new avenues for addressing uncertainty in complex dynamical systems. This workshop brings together leading, early-career, and student researchers in stochastic control, uncertainty quantification, and optimization to explore cutting-edge methodologies for planning and controlling systems under uncertainty. The intersection of these disciplines offers a fertile ground for developing practical algorithms that can handle the challenges of real-world applications, particularly in the domains of aerospace and autonomous systems. This full-day workshop will expose researchers to the broad and varied ideas of planning and control under (stochastic) uncertainty, with a suite of different approaches and methodologies to enable safe and robust control and trajectory optimization. It will enable interaction between researchers working on different aspects of stochastic control, with the aims of:
- Developing a deeper understanding of the fundamental ties between these related research topics.
- Leveraging this understanding to design optimal control algorithms able to handle uncertainty in nonlinear, potentially non-stationary, and stochastic environments.
The workshop will focus on the following four key research areas:
Stochastic Model Predictive and Data Driven Control
Such approaches explicitly incorporate constraints and risk management into trajectory optimization, handling uncertainty within a receding-horizon framework. They typically involve solving optimization problems online in which both the objective and the constraints are reformulated probabilistically. Theoretical studies focus on ensuring stability and feasibility under uncertainty using techniques from convex and stochastic optimization. Approaches are further classified into traditional model-based and data-driven methods, where the latter is a nascent but growing field that synthesizes control laws directly from (noisy) data collected from the underlying system. Approaches vary in how the data are used for control; for example, the behavioral approach uses trajectory data itself to represent the system, so that a controller can be designed without an intermediate identification step.
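As a small illustration of the behavioral route mentioned above, the sketch below predicts future outputs of a linear system directly from one recorded trajectory, organized into block-Hankel matrices, without ever identifying a model. It is a minimal sketch under simplifying assumptions (noise-free data, an illustrative second-order system, and arbitrarily chosen horizons), not an implementation of any speaker's method.

```python
import numpy as np

def block_hankel(w, L):
    """Block-Hankel matrix of depth L built from a (T, m) signal; shape (L*m, T-L+1)."""
    T, m = w.shape
    return np.hstack([w[j:j + L].reshape(L * m, 1) for j in range(T - L + 1)])

# One sufficiently long, persistently excited experiment on an LTI system.
# The matrices below are illustrative and are used only to generate data;
# the predictor itself never sees them.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
T = 80
u_d = rng.standard_normal((T, 1))
x, y_d = np.zeros(2), np.zeros((T, 1))
for t in range(T):
    y_d[t] = C @ x
    x = A @ x + B @ u_d[t]

# Behavioral predictor: a short past window pins down the (unknown) initial state,
# the remaining Hankel rows describe every possible continuation.
T_ini, N = 4, 10
L = T_ini + N
Hu, Hy = block_hankel(u_d, L), block_hankel(y_d, L)
Up, Uf = Hu[:T_ini], Hu[T_ini:]
Yp, Yf = Hy[:T_ini], Hy[T_ini:]

u_ini, y_ini = u_d[-T_ini:], y_d[-T_ini:]   # most recently measured data
u_future = np.ones((N, 1))                  # candidate future input (e.g. proposed by an MPC solver)

# Any g consistent with the past data and the chosen future input yields the prediction.
Phi = np.vstack([Up, Yp, Uf])
rhs = np.vstack([u_ini, y_ini, u_future])
g, *_ = np.linalg.lstsq(Phi, rhs, rcond=None)
y_pred = Yf @ g                             # predicted future outputs, no model identified
print(np.round(y_pred.ravel(), 3))
```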
Distributional Control
Distributional control comprises methods aimed at controlling or steering probability distributions and at handling model uncertainty through distributional reasoning. For example, optimal transport theory provides tools for quantifying distances between probability measures. This topic covers the theoretical insights with which a control design can steer a system from one distribution to another, or be robustified against uncertainty in the underlying probability distributions. Similarly, density steering is concerned with shaping the full probability distribution of a system's state rather than focusing solely on its mean evolution. It encompasses techniques such as covariance steering, which directly manipulates the state covariance using simultaneous open-loop and feedback control. The problems in this area have deep connections with the theory of optimal transport and Schrödinger bridges, which provide a fruitful intersection between pure mathematics and optimal control theory.
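Two standard formulas give the flavor of the objects involved (generic notation, not tied to any specific talk or result): the closed-form 2-Wasserstein distance between Gaussians, and a basic finite-horizon covariance (density) steering problem for a linear system \(x_{k+1} = A x_k + B u_k + w_k\) with \(w_k \sim \mathcal{N}(0, W)\):

\[
W_2^2\big(\mathcal{N}(\mu_1,\Sigma_1),\,\mathcal{N}(\mu_2,\Sigma_2)\big)
= \|\mu_1-\mu_2\|_2^2
+ \operatorname{tr}\!\Big(\Sigma_1+\Sigma_2-2\big(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2}\big)^{1/2}\Big),
\]

\[
\min_{\pi_0,\dots,\pi_{N-1}}\ \mathbb{E}\Big[\sum_{k=0}^{N-1} x_k^\top Q x_k + u_k^\top R u_k\Big]
\quad\text{s.t.}\quad u_k=\pi_k(x_0,\dots,x_k),\qquad
x_0\sim\mathcal{N}(\mu_0,\Sigma_0),\qquad
x_N\sim\mathcal{N}(\mu_f,\Sigma_f),
\]

i.e., the controller must deliver a prescribed terminal distribution, not merely a terminal mean.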
Stochastic Safety and Probabilistic Guarantees
Stochastic safety comprises methods for ensuring system safety via safety filters and optimal control, and for verifying reachability under stochastic disturbances. For example, stochastic reachability deals with computing the probability that a stochastic system reaches (or avoids) particular sets. Its mathematical foundations typically involve solving a Hamilton-Jacobi-Bellman (HJB) partial differential equation or, in discrete time, a dynamic programming problem that handles general forms of stochasticity. Its applications span a myriad of settings, including but not limited to constructing control laws and safety filters. On the other hand, stochastic barriers extend the concept of a deterministic barrier function, which guarantees system safety by ensuring that the underlying safe set is forward invariant, to the stochastic setting. The ultimate goal of a stochastic barrier, used either as a control law or as a safety filter, is to ensure system safety with high probability.
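A representative form of the guarantee such methods provide, stated generically for a discrete-time system \(x_{k+1}=f(x_k,w_k)\) (a textbook-style statement, not a result attributed to any particular speaker): a nonnegative function \(B\) satisfying

\[
B(x)\ \ge\ 1\quad \forall x\in X_{\mathrm{unsafe}},
\qquad
\mathbb{E}\big[B(x_{k+1})\mid x_k=x\big]\ \le\ B(x)+c\quad \forall x\in X,
\]

for some \(c\ge 0\) is a stochastic barrier certificate, and the supermartingale (Ville) inequality then bounds the probability of ever becoming unsafe over a horizon of \(N\) steps:

\[
\mathbb{P}\big(\exists\, k\le N:\ x_k\in X_{\mathrm{unsafe}}\ \big|\ x_0\big)\ \le\ B(x_0)+cN .
\]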
Applications in Aerospace
This track is dedicated to practically relevant algorithms that exploit and develop theoretical insights so that they are amenable to practical applications. It brings together methods from the previous tracks, opens the discussion on applications where handling stochasticity is crucial, and demonstrates their effectiveness on real-world problems. Examples come from aerospace, a core motivation for this workshop, such as low-thrust cislunar and interplanetary guidance, turbulence-affected quadrotor trajectory optimization, and precision rocket landing.
Speakers
Faculty
Alessandro Abate
University of Oxford
Biography
Professor Alessandro Abate is Professor of Verification and Control in the Department of Computer Science at the University of Oxford, a Fellow and Tutor at St. Hugh's College, and a Faculty Fellow at the Alan Turing Institute in London. Born in Milan in 1978 and raised in Padua, he earned a Laurea degree in Electrical Engineering (summa cum laude) from the University of Padua in 2002, after spending periods of study at UC Berkeley and RWTH Aachen. He went on to complete an M.S. (2004) and a Ph.D. (2007) in Electrical Engineering and Computer Sciences at UC Berkeley, where he worked on Systems and Control Theory under Shankar Sastry. While at Berkeley, he also served as an International Fellow in the Computer Science Laboratory at SRI International in Menlo Park, California. After finishing his doctorate, he joined Stanford University's Department of Aeronautics and Astronautics as a post-doctoral researcher, collaborating with Claire Tomlin on systems biology in affiliation with the Stanford School of Medicine. From June 2009 to mid-2013 he was an Assistant Professor at the Delft Center for Systems and Control, TU Delft, where he led a research group focused on the verification and control of complex systems. His research interests center on the analysis, formal verification, and control of heterogeneous and complex dynamical models—particularly stochastic hybrid systems—and their applications to cyber-physical systems, with an emphasis on safety-critical domains, energy systems, and biological networks.
Behçet Açıkmeşe
University of Washington
Biography
Dr. Açıkmeşe received his M.S. in mechanical engineering and his Ph.D. in aerospace engineering from Purdue University. He was a technologist and a senior member of the Guidance and Control (G&C) Analysis Group at NASA Jet Propulsion Laboratory (JPL) from 2003 to 2012, and was a visiting Assistant Professor of Aerospace Engineering at Purdue University before joining JPL. At JPL, he developed guidance, control, and estimation algorithms for formation-flying spacecraft and distributed networked systems, proximity operations around asteroids and comets, and planetary landing, as well as interior-point algorithms for the real-time solution of convex optimization problems. Dr. Açıkmeşe's research developed a fundamental result, known as "lossless convexification", that provides the solution of a general class of nonconvex optimal control problems via computationally tractable convex optimization methods. This theoretical insight led to a leap in G&C technology that has made planetary pinpoint landing feasible. NASA has been investing in the demonstration of this technology to mature it for next-generation missions to Mars and other planets. Dr. Açıkmeşe also worked on NASA missions. He was a member of NASA's Mars Science Laboratory (MSL) G&C team, where he developed and delivered G&C algorithms used in the "fly-away phase" of the successful Curiosity rover landing in August 2012. He also developed Reaction Control System (RCS) algorithms for NASA's SMAP (Soil Moisture Active Passive) mission, which launched in 2015.
Francesco Borrelli
University of California, Berkeley
Biography
Francesco Borrelli received the 'Laurea' degree in computer science engineering in 1998 from the University of Naples 'Federico II', Italy. In 2002 he received his PhD from the Automatic Control Laboratory at ETH Zurich, Switzerland. He is currently a Professor in the Department of Mechanical Engineering at the University of California, Berkeley, USA. He is the author of more than one hundred and fifty publications in the field of predictive control and of the book Predictive Control, published by Cambridge University Press. He is the winner of the 2009 NSF CAREER Award and of the 2012 IEEE Control Systems Technology Award. In 2016 he was elected IEEE Fellow. In 2017 he was awarded the Industrial Achievement Award by the International Federation of Automatic Control (IFAC) Council. Since 2004 he has served as a consultant for major international corporations. He was the founder and CTO of BrightBox Technologies Inc., a company focused on cloud-computing optimization for autonomous systems, acquired by Flex, Inc. in 2016. He was the co-director of the Hyundai Center of Excellence in Integrated Vehicle Safety Systems and Control at UC Berkeley. He is the co-founder of WideSense, Inc., a UC Berkeley spinoff focused on Mobility Contextual Intelligence. His research interests are in the area of model predictive control and its application to automated driving, robotics, food, and energy systems.
Yongxin Chen
Georgia Institute of Technology
Biography
Yongxin Chen was born in Ganzhou, Jiangxi, China. He received his B.Sc. in Mechanical Engineering from Shanghai Jiao Tong University, China, in 2011, and his Ph.D. in Mechanical Engineering, under the supervision of Tryphon Georgiou, from the University of Minnesota in 2016. He is currently an Associate Professor in the School of Aerospace Engineering at the Georgia Institute of Technology. Before joining Georgia Tech, he held a one-year Research Fellowship in the Department of Medical Physics at Memorial Sloan Kettering Cancer Center with Allen Tannenbaum from 2016 to 2017 and was an Assistant Professor in the Department of Electrical and Computer Engineering at Iowa State University from 2017 to 2018. He received the George S. Axelby Best Paper Award (IEEE Transactions on Automatic Control) in 2017 for his joint work "Optimal steering of a linear stochastic system to a final probability distribution, Part I" with Tryphon Georgiou and Michele Pavon, and the SIAM Journal on Control and Optimization Best Paper Award in 2023. He received the NSF CAREER Award in 2020, the Simons-Berkeley research fellowship in 2021, the A.V. 'Bal' Balakrishnan Award in 2021, and the Donald P. Eckman Award in 2022. He delivered plenary talks at the 2023 American Control Conference and the 2024 International Symposium on Mathematical Theory of Networks and Systems.
Florian Dörfler
ETH Zürich
Biography
Florian Dörfler is a Professor at the Automatic Control Laboratory at ETH Zürich. He received his Ph.D. degree in Mechanical Engineering from the University of California at Santa Barbara in 2013, and a Diplom degree in Engineering Cybernetics from the University of Stuttgart in 2008. From 2013 to 2014 he was an Assistant Professor at the University of California, Los Angeles. He served as Associate Head of the ETH Zürich Department of Information Technology and Electrical Engineering from 2021 until 2022. His research interests are centered around automatic control, system theory, optimization, and learning. His particular foci are on network systems, data-driven settings, and applications to power systems. He is a recipient of the distinguished young research awards by IFAC (Manfred Thoma Medal 2020) and EUCA (European Control Award 2020). He and his team have received best paper distinctions in the top venues of control, machine learning, power systems, power electronics, and circuits and systems. They were recipients of the 2011 O. Hugo Schuck Best Paper Award, the 2012-2014 Automatica Best Paper Award, the 2016 IEEE Circuits and Systems Guillemin-Cauer Best Paper Award, the 2022 IEEE Transactions on Power Electronics Prize Paper Award, the 2024 Control Systems Magazine Outstanding Paper Award, and multiple Best PhD Thesis Awards at UC Santa Barbara and ETH Zürich. They were further winners or finalists for Best Student Paper awards at the European Control Conference (2013, 2019), the American Control Conference (2010, 2016, 2024), the Conference on Decision and Control (2020), the PES General Meeting (2020), the PES PowerTech Conference (2017), the International Conference on Intelligent Transportation Systems (2021), the IEEE CSS Swiss Chapter Young Author Best Journal Paper Award (2022, 2024), the IFAC Conferences on Nonlinear Model Predictive Control (2024) and Cyber-Physical-Human Systems (2024), and a NeurIPS Oral (2024). He is currently serving on the council of the European Control Association and as a senior editor of Automatica.
Giancarlo Ferrari-Trecate
EPFL
Biography
Giancarlo Ferrari-Trecate received the Ph.D. degree in Electronic and Computer Engineering from the Università degli Studi di Pavia in 1999. Since September 2016 he has been a Professor at EPFL, Lausanne, Switzerland. In spring 1998, he was a Visiting Researcher at the Neural Computing Research Group, University of Birmingham, UK. In fall 1998, he joined the Automatic Control Laboratory at ETH Zurich, Switzerland, as a Postdoctoral Fellow. He was appointed Oberassistent at ETH in 2000. In 2002, he joined INRIA, Rocquencourt, France, as a Research Fellow. From March to October 2005, he was a researcher at the Politecnico di Milano, Italy. From 2005 to August 2016, he was an Associate Professor at the Dipartimento di Ingegneria Industriale e dell'Informazione of the Università degli Studi di Pavia. His research interests include scalable control, microgrids, networked control systems, hybrid systems, and machine learning. Giancarlo Ferrari-Trecate was the recipient of the Researcher Mobility Grant from the Italian Ministry of Education, University and Research in 2005. He is currently a member of the IFAC Technical Committees on Control Design and Optimal Control, and of the Technical Committee on Systems Biology of the IEEE SMC Society. He has served on the editorial boards of Automatica (for nine years) and of Nonlinear Analysis: Hybrid Systems.
Tryphon T. Georgiou
University of California, Irvine
Biography
Tryphon T. Georgiou is a UCI Distinguished Professor in Mechanical and Aerospace Engineering at the University of California, Irvine. He studied at the National Technical University of Athens, Greece (Diploma in Mechanical and Electrical Engineering, 1979), and the University of Florida, Gainesville (PhD, 1983). Prior to joining the University of California, Irvine, he served on the faculty at the University of Minnesota, Iowa State University, and Florida Atlantic University. Dr. Georgiou received the George S. Axelby Outstanding Paper Award of the IEEE Control Systems Society for the years 1992, 1999, 2003, and 2017. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the International Federation of Automatic Control (IFAC), and a Foreign Member of the Royal Swedish Academy of Engineering Sciences (IVA).
Kenshiro Oguri
Purdue University
Biography
Dr. Kenshiro (Ken) Oguri is an Assistant Professor of Aeronautics and Astronautics at Purdue University. Ken's research interests include orbital mechanics, control theory, stochastic systems, and optimization. At Purdue, he currently leads a research group of 13 graduate students. He has published more than 100 journal/conference papers in these fields. On the control-theoretic side, his research spans stochastic control, optimal control, nonlinear control, and optimization. On the space application front, his research addresses challenges in space exploration, navigation, and autonomy, in collaboration with NASA, JPL, AFOSR, Draper, and JAXA. His research has been recognized by a NASA Early Career Faculty award and multiple paper awards. Prior to joining the Purdue faculty in 2022, he worked at NASA JPL and JAXA. He received his Ph.D. from the University of Colorado Boulder in 2021, and his M.S. and B.S. from the University of Tokyo in 2017 and 2015, respectively.
Panagiotis Tsiotras
Georgia Institute of Technology
Biography
Dr. Tsiotras holds the David & Andrew Lewis Endowed Chair in the Daniel Guggenheim School of Aerospace Engineering at Georgia Tech. He is also Associate Director of the Institute for Robotics and Intelligent Machines. His current research interests include nonlinear and optimal control and their connections with AI, planning, and decision-making, with an emphasis on applications to autonomous ground, aerial, and space vehicles. He has published more than 350 journal and conference articles in these areas. Prior to joining the faculty at Georgia Tech, Dr. Tsiotras was an assistant professor of mechanical and aerospace engineering at the University of Virginia. He has also held visiting appointments at MIT, JPL, INRIA Rocquencourt, the Laboratoire d'Automatique de Grenoble, and the École des Mines de Paris (Mines ParisTech). Dr. Tsiotras is a recipient of the NSF CAREER Award, the IEEE Technical Excellence Award in Aerospace Controls, the Outstanding Aerospace Engineer Award from Purdue, and the Sigma Xi President and Visitor's Award for Excellence in Research, as well as numerous other fellowships and scholarships. He is currently the Chief Editor of Frontiers in Robotics & AI in the area of Space Robotics, and an associate editor for the Dynamic Games and Applications journal. In the past, he has served as an associate editor for the IEEE Transactions on Automatic Control, the AIAA Journal of Guidance, Control, and Dynamics, the IEEE Control Systems Magazine, and the Journal of Dynamical and Control Systems. He is a Fellow of the AIAA, IEEE, and AAS.
Postdocs/Students
Andrea Martin
Postdoctoral Researcher, KTH Royal Institute of Technology
Biography
Andrea Martin is a Postdoctoral Researcher at KTH Royal Institute of Technology and Digital Futures, where he works with Prof. Giuseppe Belgioioso and Prof. Mikael Johansson. He received his Ph.D. in Robotics, Control, and Intelligent Systems from EPFL in 2025, under the supervision of Prof. Giancarlo Ferrari Trecate, Prof. John Lygeros, and Prof. Florian Dörfler. During his doctoral studies, he was affiliated with the NCCR Automation. Before that, he obtained a B.Sc. in Information Engineering and two M.Sc. degrees in Automation Engineering and Automatic Control and Robotics from the University of Padova and the Polytechnic University of Catalonia through the TIME double degree program. His research interests lie at the intersection of control theory, optimization, and machine learning. He was awarded the Digital Futures Postdoctoral Fellowship in 2025.
András Sasfi
Doctoral Student, ETH Zurich
Biography
Andras Sasfi is a doctoral student with the Automatic Control Laboratory at ETH Zurich, Switzerland. He received his bachelor's degree in mechanical engineering from the Budapest University of Technology and Economics, Hungary, in 2019, and his master's degree, also in mechanical engineering, from ETH Zurich in 2022. His research interests include system identification and data-driven control in the behavioral setting.
George Rapakoulias
Ph.D. Student, Georgia Tech
Biography
George Rapakoulias received his Engineering Diploma in Mechanical Engineering from the National Technical University of Athens (NTUA) in 2021. He is currently pursuing a Ph.D. in Machine Learning at the Georgia Institute of Technology under the supervision of Dr. Panagiotis Tsiotras. His research interests include generative AI and diffusion models, stochastic and mean-field control theory, and optimal transport. He is an Alexander Onassis Foundation scholar.
Rayan Mazouz
Ph.D. Candidate, University of Colorado Boulder
Biography
Rayan Mazouz is a doctoral candidate at the University of Colorado Boulder. His research focuses on the verification and control of stochastic nonlinear safety-critical systems using stochastic barrier certificates. A central aim of his work is the principled integration of AI and robotics. He obtained his Master of Science degree in Aerospace Engineering from Delft University of Technology, The Netherlands. He was a Fulbright scholar and a researcher in robotics at NASA's Jet Propulsion Laboratory.
Tren M.J.T. Baltussen
Ph.D. Student, Eindhoven University of Technology
Biography
Tren Baltussen received the M.Sc. degree (cum laude) in Systems and Control from the Eindhoven University of Technology (TU/e) in 2024. His M.Sc. thesis was on learning-based control methods for motion planning of autonomous vehicles. He is currently working towards the Ph.D. degree in the Control Systems Technology group within the Department of Mechanical Engineering at TU/e under the supervision of Prof. Maurice Heemels and Prof. Alexander Katriniok. His research interests include model predictive control (MPC) of autonomous systems subject to uncertainty and dual control theory.
David Leeftink
Ph.D. Candidate, Radboud University
Biography
David Leeftink is a PhD candidate at the Donders Institute for Brain, Cognition and Behaviour at Radboud University. Co-supervised by Dr. Max Hinne and Prof. Marcel van Gerven, his research focuses on the intersection of optimal control theory and probabilistic machine learning for decision-making under uncertainty. Specifically, he develops planning and optimization methods for systems with learned, large-scale dynamics models. He holds an M.Sc. in Artificial Intelligence from Radboud University.
Riccardo Cescon
Ph.D. Student, EPFL Lausanne
Biography
Riccardo is a PhD student at the Automatic Control Lab at EPFL Lausanne under the supervision of Prof. Ferrari-Trecate. He received a B.Sc. in Information Engineering and an M.Sc. in Control Systems Engineering from the University of Padova in 2020 and 2022, respectively. During his master's thesis he spent six months at ETH Zürich as a visiting student at the Institut für Automatik under the supervision of Prof. Florian Dörfler. His current research interests include distributionally robust optimization methods with applications to control and machine learning.
Steven Adams
Ph.D. Student, TU Delft
Biography
Steven Adams received the M.Sc. degree in Econometrics and Operations Research in 2020 from VU Amsterdam, and the M.Sc. degree in Systems and Control in 2021 from the Delft Center for Systems and Control (DCSC), Delft University of Technology (TU Delft). He is currently pursuing a Ph.D. degree with DCSC, TU Delft. His main research interests include stochastic systems, formal methods, and machine learning.
Eduardo Figueiredo Mota Diniz Costa
Ph.D. Candidate, TU Delft
Biography
Eduardo Figueiredo is a Ph.D. candidate at the Delft Center for Systems and Control, Delft University of Technology. He received his B.S. and M.S. degrees in Engineering from the University of São Paulo, Brazil, and École Polytechnique, France (2019).
Naoya Kumagai
Ph.D. Student, Purdue University
Biography
Naoya Kumagai is a PhD student in the School of Aeronautics and Astronautics at Purdue University. He received his B.S. in Applied Mathematics from the University of California, Los Angeles in 2022 and his M.S. in Aeronautics and Astronautics from Purdue University in 2024. His current research focuses on stochastic optimal control theory and its applications to robust control of spacecraft. He was a visiting researcher at the Jet Propulsion Laboratory in 2024 and an intern in the GNC team at ispace Japan in 2025.
Aman Tiwary
Ph.D. Student, University of New Mexico
Biography
Aman Tiwary is a Ph.D. student in Electrical and Computer Engineering at the University of New Mexico, advised by Prof. Meeko Oishi in the Human-Centered Systems and Control Laboratory. He received his M.S. in Mechanical Engineering from the University of Washington, where he worked with Prof. Behçet Açıkmeşe in the Autonomous Controls Laboratory. He completed his B.Tech. in Mechanical Engineering at Birla Institute of Technology, Mesra, in 2017, where he worked with Dr. Sudip Das in the Department of Space Engineering and Rocketry. His research interests include optimization-based motion planning, stochastic planning and control, autonomous multi-agent systems, and real-time guidance algorithms.
Schedule
Morning Session I: Stochastic Control and Safety I
8:30am - 8:35am
5 min | Workshop Opening Remarks
8:35am - 9:10am
35 min | Learning-Based Data-Driven MPC: Robustness and Adaptation Under Uncertainty
Abstract
Recent advances in stochastic and data-driven Model Predictive Control (MPC) provide new tools for controlling complex systems subject to uncertainty and limited information. In this talk, I will discuss approaches that merge robust control with learning from data to address challenges arising from stochastic disturbances, partial observability, and modeling errors. The focus will be on three perspectives: (i) iterative learning MPC, where data from repeated executions is used to build invariant sets and safe policies with guaranteed stability and constraint satisfaction; (ii) the role of sampling and discretization in robust MPC, and how generalized invariance concepts enable adaptive sampling strategies without compromising safety; and (iii) stochastic MPC in partially observable environments, where Hidden Markov Models and chance constraints are used to reason about uncertainty in environment modes.
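For orientation, the generic chance-constrained MPC problem underlying these perspectives can be written schematically as (a standard template, not the speaker's exact formulation):

\[
\min_{\pi_0,\dots,\pi_{N-1}}\ \mathbb{E}\Big[\sum_{k=0}^{N-1}\ell(x_k,u_k)+V_f(x_N)\Big]
\quad\text{s.t.}\quad
x_{k+1}=f(x_k,u_k,w_k),\qquad
\mathbb{P}\big(x_k\in\mathcal{X}\big)\ge 1-\varepsilon,\qquad
u_k=\pi_k(\cdot)\in\mathcal{U},
\]

re-solved at every sampling instant in receding-horizon fashion, with only the first input applied.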
9:10am - 9:45am
35 min | Neural Proofs for Sound Verification and Control of Complex Systems
Abstract
I discuss the construction of sound proofs for the formal verification and control of complex stochastic models of dynamical systems and reactive programs. Neural proofs are made up of two parts. Proof rules encode requirements for the verification of general temporal specifications over the models of interest. Certificates are then constructed from said proof rules with an inductive approach, namely accessing samples from the dynamics and training neural nets, whilst generalising such networks via SAT-modulo-theory queries, based on the full knowledge of the models. In the context of sequential decision making problems over stochastic models, I discuss how to additionally generate policies/strategies/controllers, in order to formally attain given specifications.
9:45am - 10:00am
15 min | Grad Student Lightning Round 1
Density Control of Gaussian Mixture Models: From Controlling Large Populations to Generative AI
George Rapakoulias
We present a novel framework for controlling the state distribution of linear dynamical systems in the special case where it can be expressed as a Gaussian Mixture Model (GMM). The central object is the approximation of the solution to the Schrödinger Bridge (SB) problem with GMM marginal distributions, an optimal diffusion process transporting one Gaussian mixture into another at minimal cost. Classical approaches to solving SBs often rely on spatial discretizations or costly neural network training, requiring considerable computation even in low-dimensional problems. Building on recent advances in computational optimal transport and distributional control, we introduce a closed-form parameterization of a feasible set of solutions to the SB with GMM boundary distributions and approximate the solution within this feasible set by solving a linear program. This yields numerically efficient, interpretable, and constraint-aware control laws, enabling applications from steering interacting multi-agent populations to lightweight generative AI models.
Julia Toolbox for Stochastic Barrier Functions in Safety Verification of Stochastic Systems
Rayan Mazouz
We introduce StochasticBarrier.jl, an open-source Julia-based toolbox for generating Stochastic Barrier Functions to verify safety in discrete-time stochastic systems with additive Gaussian noise. The framework supports linear, polynomial, and piecewise affine dynamics, enabling verification of general nonlinear systems. The tool implements both sum-of-squares optimization and piecewise constant approaches, the latter offering three engines: two based on (dual) linear programming and one on gradient descent. Benchmarking across many case studies demonstrates that the tool outperforms state-of-the-art methods in computation time, scalability, and conservativeness of safety probability bounds.
10:00am - 10:30am
30 min | Coffee Break
Morning Session II: Distributional Control
10:30am - 11:05am
35 min | Schrödinger Bridges: Old and New
Abstract
In 1931 Erwin Schrödinger published a paper with the title "Über die Umkehrung der Naturgesetze" (On the Reversal of the Laws of Nature), where he explored the time reversal of the law of a diffusion process and its implications when conditioning the law to satisfy specified marginals at two points in time. The law of the conditioned process, with time-marginals that interpolate the specified end-point marginals, came to be known as a Schrödinger bridge. Schrödinger's ideas linked a rather broad spectrum of concepts that, in modern language, include the relative entropy between probability laws, likelihood estimation, large deviations theory, stochastic optimization and Monge-Kantorovich optimal mass transport. The aim of the presentation is to overview the mathematics and applications of SBs in control theory and related fields.
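In modern notation, the problem Schrödinger posed can be sketched as follows (for orientation only): given a reference diffusion with path law \(W\) and prescribed marginals \(\mu_0,\mu_1\) at times \(0\) and \(1\), find

\[
P^\star \in \arg\min_{P}\ \mathrm{KL}\big(P\,\|\,W\big)
\quad\text{subject to}\quad
P\circ X_0^{-1}=\mu_0,\qquad P\circ X_1^{-1}=\mu_1 ,
\]

whose restriction to the endpoint pair \((X_0,X_1)\) is an entropy-regularized Monge-Kantorovich optimal transport problem.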
11:05am - 11:40am
35 min | Data-driven and Robust Distribution Steering
Abstract
Control of uncertainty through feedback is at the heart of control theory. Recently, there has been a paradigm shift in the way uncertainty is handled, through the propagation (and, in particular, the control) of the whole distribution of the state trajectories, beyond just the mean. Many results have been reported over the last decade primarily focusing on linear systems, collectively referred to as "covariance steering." Standard covariance or distribution steering, however, assumes a priori knowledge of the model of the process to be controlled. In this talk, we pose and solve the covariance steering problem using raw, noisy input-output data collected from the underlying system. We will present a novel framework that simultaneously characterizes the noise affecting the measured data and designs an optimal affine-feedback controller to steer the density of the state to a prescribed terminal value. The first and second moment steering problems are then solved to optimality using techniques from robust control and robust optimization. We will also discuss extensions of covariance steering subject to distributional uncertainty of the noise process, modeled via Wasserstein ambiguity sets, with distributionally robust CVaR constraints on the transient motion of the state. In all cases, the controller synthesis requires the solution of a semi-definite optimization program, which can be solved efficiently using standard off-the-shelf solvers.
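For orientation, the model-based version of the problem has the following structure (generic notation; the talk's contribution is to achieve the same goal directly from noisy input-output data). With an affine state-feedback policy \(u_k=v_k+K_k(x_k-\mu_k)\) for \(x_{k+1}=Ax_k+Bu_k+w_k\), \(w_k\sim\mathcal{N}(0,W)\), the moments evolve as

\[
\mu_{k+1}=A\mu_k+Bv_k,
\qquad
\Sigma_{k+1}=(A+BK_k)\,\Sigma_k\,(A+BK_k)^{\top}+W,
\]

and one imposes \(\mu_N=\mu_f\) and \(\Sigma_N\preceq\Sigma_f\) as terminal conditions; after a standard change of variables the design becomes a semidefinite program.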
11:40am - 12:15pm
35 min | Distributionally robust LQ control: from a pool of samples to the design of dependable controllers
Abstract
This talk addresses Linear–Quadratic (LQ) control design when only partial statistical information about the disturbances affecting the system is available. Specifically, given a finite set of noise samples, we consider the problem of synthesizing an optimal control policy that offers provable safety and performance guarantees despite distributional uncertainty. To mitigate poor out-of-sample performance, we adopt distributionally robust optimization (DRO) formulations based on ambiguity sets centered at the empirical uncertainty distribution. In the first part of the talk, we model this probabilistic mismatch using Wasserstein ambiguity sets and revisit DRO duality results to derive convex reformulations of infinite-horizon LQ control problems with safety constraints and bounded-support uncertainties. In the second part of the talk, we introduce an entropic regularization term in the transport plan and explore ambiguity sets defined in terms of Sinkhorn’s discrepancy—a framework not yet exploited in the control literature. This framework addresses two main limitations of data-driven Wasserstein DRO: (i) it does not constrain the worst-case distribution to be discrete, and (ii) it smooths the dual objective, enabling efficient implementation for a broad class of objective functions. In the final part of the talk, we outline perspectives for extending this approach beyond LQ control, leveraging gradient-based methods to design distributionally robust control policies.
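Schematically, the problems considered in this talk take the form (generic notation, written here only to fix ideas):

\[
\min_{\pi}\ \sup_{\mathbb{P}\in\mathcal{B}_\varepsilon(\widehat{\mathbb{P}}_N)}\
\mathbb{E}_{\mathbb{P}}\Big[\sum_{k} x_k^\top Q x_k + u_k^\top R u_k\Big],
\qquad
\mathcal{B}_\varepsilon(\widehat{\mathbb{P}}_N)=\big\{\mathbb{P}:\ d\big(\mathbb{P},\widehat{\mathbb{P}}_N\big)\le\varepsilon\big\},
\]

where \(\widehat{\mathbb{P}}_N\) is the empirical distribution of the disturbance samples and the discrepancy \(d\) is either a Wasserstein distance (first part of the talk) or Sinkhorn's discrepancy (second part).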
12:15pm - 1:45pm
1 hr 30 min | Lunch Break
Afternoon Session I: Stochastic Control and Safety II
1:45pm - 2:20pm
35 min | Safety Assurance of Stochastic Systems
Abstract
Safety is a critical requirement for real-world systems, including autonomous vehicles, robots, power grids, and more. Over the past decades, many methods have been developed for safety verification and safe control design in deterministic systems. However, real-world applications often involve not only worst-case deterministic disturbances but also stochastic uncertainties, rendering deterministic methods insufficient. In this talk, I will present an effective framework that addresses this challenge by decoupling the effects of stochastic and deterministic disturbances. At the heart of this framework is a novel technique that provides probabilistic bounds on the deviation between the trajectories of stochastic systems and their deterministic counterparts with high confidence. This approach yields a tight probabilistic bound that is applicable to both continuous-time and discrete-time systems. By leveraging this bound, the safety verification problem for stochastic systems can be reduced to a deterministic one, enabling the use of existing deterministic methods to solve problems involving stochastic uncertainties. I will demonstrate the effectiveness of this framework through several safety verification and safe control tasks.
2:20pm - 2:55pm
35 min | Gaussian Behaviors: Representations and Data-Driven Control
Abstract
We propose a modeling framework for stochastic systems based on Gaussian processes. Finite-length trajectories of the system are modeled as random vectors from a Gaussian distribution, which we call a Gaussian behavior. The proposed model naturally quantifies the uncertainty in the trajectories, yet it is simple enough to allow for tractable formulations. We relate the proposed model to existing descriptions of dynamical systems including deterministic and stochastic behaviors, and linear time-invariant (LTI) state-space models with Gaussian process and measurement noise. Gaussian behaviors can be estimated directly from observed data as the empirical sample covariance under the assumption that the measured trajectories are from independent experiments. The distribution of future outputs conditioned on inputs and past outputs provides a predictive model that can be incorporated in predictive control frameworks. We show that subspace predictive control (SPC) is a certainty-equivalence control formulation with the estimated Gaussian behavior. Furthermore, the regularized data-enabled predictive control (DeePC) method is shown to be a distributionally optimistic formulation that optimistically accounts for uncertainty in the Gaussian behavior. To mitigate the excessive optimism of DeePC, we propose a novel distributionally robust control formulation, and provide a convex reformulation allowing for efficient implementation.
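A minimal numerical sketch of the modeling idea (the system, horizons, and noise levels below are illustrative choices, not taken from the talk): estimate the mean and covariance of stacked input-output trajectories from independent experiments, then use standard Gaussian conditioning as the predictor.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative LTI system used only to generate data; it is unknown to the method.
A = np.array([[0.8, 0.3], [0.0, 0.9]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
T_ini, N = 3, 8        # past and future horizons
L = T_ini + N          # trajectory length

def experiment(u):
    """One independent experiment: apply the length-L input u, record noisy outputs."""
    x, y = rng.standard_normal(2), np.zeros(L)
    for t in range(L):
        y[t] = C @ x + 0.05 * rng.standard_normal()
        x = A @ x + B * u[t] + 0.05 * rng.standard_normal(2)
    return y

# Estimate the "Gaussian behavior": empirical mean/covariance of stacked trajectories.
M = 500
Z = np.zeros((M, 2 * L))              # each row: [u_0, ..., u_{L-1}, y_0, ..., y_{L-1}]
for i in range(M):
    u = rng.standard_normal(L)
    Z[i] = np.concatenate([u, experiment(u)])
mu, Sigma = Z.mean(axis=0), np.cov(Z, rowvar=False)

# Predictive model: condition future outputs on (all inputs, past outputs).
idx_c = np.r_[0:L, L:L + T_ini]       # conditioning block
idx_p = np.r_[L + T_ini:2 * L]        # predicted block (future outputs)
S_cc = Sigma[np.ix_(idx_c, idx_c)]
S_pc = Sigma[np.ix_(idx_p, idx_c)]

u_new = np.ones(L)                    # a fresh input sequence
y_new = experiment(u_new)             # its noisy realization (past part is "measured")
z_c = np.concatenate([u_new, y_new[:T_ini]])
y_fut = mu[idx_p] + S_pc @ np.linalg.solve(S_cc, z_c - mu[idx_c])
print("predicted:", np.round(y_fut, 2))
print("realized: ", np.round(y_new[T_ini:], 2))
```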
2:55pm - 3:30pm
35 min | Grad Student Lightning Round 2
Safe Dual Control through MPC
Tren M.J.T. Baltussen
Dual control addresses the fundamental trade-off between regulation and information acquisition in uncertain dynamical systems. While Model Predictive Control (MPC) provides a principled framework for constrained and safety-critical control, existing robust and stochastic MPC schemes often neglect the dependence of the posterior distribution of the state, i.e., the hyperstate, on the control policy itself. This separation limits their ability to capture the dual control effect, which is of particular interest in interactive environments. Our research develops MPC methods that explicitly incorporate the dual control effect, e.g., in Gaussian Process-based MPC (GP-MPC). While GP-MPC is effective as a heuristic, its safety guarantees are hindered by modeling errors. To address this, we present a multi-horizon, contingency MPC framework that systematically integrates learning-based models with robust constraint satisfaction. This approach ensures recursive feasibility and safety while mitigating conservatism by exploiting the receding horizon of MPC. We demonstrate our method on use cases in automated driving. This work is a first step toward theoretically supported dual model predictive control methods for safety-critical systems.
Optimal Control of Probabilistic Dynamics Models via Mean Hamiltonian Minimization
David Leeftink
A central question in learning-based control is how to plan effectively under learned models from limited data. In this talk, we explore this question by building a bridge from classical optimal control theory to modern deep reinforcement learning. We introduce a practical framework based on the principle of mean Hamiltonian minimization, where we plan over an entire ensemble of learned dynamics models simultaneously. This approach provides a structured and effective alternative to common trajectory optimization methods. I will present our method, show its effectiveness on several nonlinear control problems, and discuss its potential as a robust foundation for building reliable decision-making algorithms.
Distributionally Robust Control
Riccardo Cescon
Distributionally Robust Optimization (DRO) has recently gained attention in control as a principled approach to handle uncertainty. Classical controllers' performance often degrades when the noise distribution deviates from the nominal one, whereas DRO provides robustness by optimizing against the most adverse distribution within an ambiguity set. Within this framework, the Wasserstein distance is the most common choice to define such sets, but it presents key limitations: the worst-case distribution is finitely supported when the nominal one is, and tractability in control problems typically requires restrictive assumptions such as linear dynamics. This talk introduces the Sinkhorn divergence as an alternative for constructing ambiguity sets. We demonstrate its applicability to standard control problems, showing how it overcomes some limitations of Wasserstein-based approaches. Finally, we outline extensions toward nonlinear control, where gradient-based optimization techniques enable the design of distributionally robust policies beyond the linear setting.
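For reference, the entropy-regularized transport cost from which Sinkhorn-based ambiguity sets are built is (generic definition from the optimal transport literature)

\[
W_\varepsilon(\mu,\nu)\;=\;\min_{\pi\in\Pi(\mu,\nu)}\ \int c(x,y)\,\mathrm{d}\pi(x,y)\;+\;\varepsilon\,\mathrm{KL}\big(\pi\,\|\,\mu\otimes\nu\big),
\]

which recovers the usual optimal transport cost as \(\varepsilon\to 0\); ambiguity sets of the form \(\{\mu:\ W_\varepsilon(\mu,\widehat{\mu})\le\rho\}\) then replace the Wasserstein ball, and their worst-case distributions need not be finitely supported.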
Formal Uncertainty Propagation in Wasserstein Distance
Steven Adams and Eduardo Figueiredo Mota Diniz Costa
Modern control systems face uncertainty both in their dynamics and in the probabilistic models used to describe them. Propagating such uncertainty through nonlinear stochastic systems is typically intractable. We present a framework for formal uncertainty propagation that combines optimal transport, distributionally robust optimization, and quantization. By modeling "uncertainty of uncertainty" as Wasserstein balls of distributions, our method enables efficient and guaranteed propagation of probabilistic sets through nonlinear dynamics. We demonstrate our open-source implementation (https://github.com/sjladams/WassProp) on a benchmark example during the demo.
3:30pm - 4:00pm
30 min | Coffee Break
Afternoon Session II: Applications in Aerospace
4:00pm - 4:35pm
35 min | Robust Trajectory Planning in the Presence of State- and Control-Dependent Uncertainties
4:35pm - 5:10pm
35 min | Stochastic Planning and Control for Space Applications
Abstract
Safety assurance is critical in autonomous vehicle operation. Yet, this principle is significantly challenged in space, where vehicles must operate autonomously in uncertain, nonlinear environments with sparse human interactions. Such operations require constrained planning and control under uncertainty over long horizons. The demand for such capabilities will only increase as we expand the frontier of our exploration across and beyond the solar system. In this talk, I will present challenges in space exploration from a control-theoretic perspective, relevant recent results from my group, and opportunities for control theory to make a difference at the forefront of space exploration. The talk will cover topics that merge stochastic control with uncertainty quantification and optimization to address key challenges in spacecraft autonomy and trajectory optimization under uncertainty.
5:10pm - 5:25pm
15 min | Grad Student Lightning Round 3
Chance-Constrained Stochastic Trajectory Optimization for Spacecraft in Nonlinear Dynamical Systems
Naoya Kumagai
As humanity's presence in space expands, current mission design and operation practices may not scale to more frequent and ambitious missions within limited budgets. The current state of practice requires iterating between separate software tools that individually perform trajectory optimization and uncertainty quantification. To pave the path towards more automated mission design, we leverage stochastic optimal control techniques in order to handle, within a unified framework, trajectory optimization and common uncertainties such as stochastic acceleration and navigation error. We demonstrate the framework on a transfer in the near-Moon region, where it modifies the original trajectory in highly nonlinear phases by accounting for the uncertainty models.
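One standard building block in such frameworks, shown here under a Gaussian approximation of the state and not necessarily in the exact form used in this work, is the deterministic reformulation of an individual linear chance constraint:

\[
\mathbb{P}\big(a^\top x_k\le b\big)\ \ge\ 1-\varepsilon
\quad\Longleftrightarrow\quad
a^\top\mu_k+\Phi^{-1}(1-\varepsilon)\,\sqrt{a^\top\Sigma_k\,a}\ \le\ b,
\]

where \(\mu_k,\Sigma_k\) are the propagated state mean and covariance and \(\Phi^{-1}\) is the standard normal quantile function, so the constraint can be enforced inside an otherwise deterministic trajectory optimizer.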
Sequential Convex Programming for Stochastic Multi-agent Systems with Connectivity Constraints
Aman Tiwary
In this work, we present an optimization-based motion planning framework to generate feasible trajectories for a stochastic multi-agent system (MAS) navigating through a cluttered environment. The objective is to ensure global connectivity among the agents while avoiding both inter-agent collisions and obstacles, with at least a desired likelihood. The traditional approach to trajectory planning for multiple agents in a deterministic setting is to embed connectivity constraints within a mixed-integer programming (MIP) formulation. However, when the MAS is subject to stochastic disturbances, the trajectories must satisfy the constraints with a specified probability. A natural way to model this is through chance constraints, but this makes the problem considerably more complex. The primary challenges arise from the integer constraints induced by connectivity requirements, the chance constraints introduced by the stochastic nature of the problem, and the interaction between the two. To address these challenges, we introduce two key relaxations. First, we propose a continuous reformulation of the connectivity constraint via complementarity conditions, leading to a purely continuous formulation. Second, the chance constraints are handled through a scenario-based relaxation, where independent scenarios are sampled from the disturbance distribution and incorporated into the optimization problem to ensure probabilistic feasibility of the solution. The resulting optimization problem is then solved using a continuous-time successive convexification framework, which guarantees convergence to feasible trajectories and continuous-time satisfaction of constraints.
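In its generic form, the scenario relaxation mentioned above replaces each chance constraint by its sampled counterparts (this is the standard scenario-approach template, not necessarily the exact construction used in this work):

\[
\mathbb{P}\big(g(x,w)\le 0\big)\ \ge\ 1-\varepsilon
\qquad\text{is replaced by}\qquad
g\big(x,w^{(i)}\big)\le 0,\quad i=1,\dots,S,
\]

where \(w^{(1)},\dots,w^{(S)}\) are independent samples of the disturbance, and scenario theory relates the number of samples \(S\) to the violation level \(\varepsilon\) and the confidence with which it is guaranteed.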
5:25pm - 5:30pm
5 min | Closing Remarks
Organizers
Vignesh Sivaramakrishnan
Air Force Research Laboratory
Biography
Vignesh Sivaramakrishnan is a Postdoctoral Fellow at the Air Force Research Laboratory through the Air Force Science & Technology Fellowship Program administered by the National Academy of Sciences, National Research Council (NRC). He currently conducts fundamental research on advanced control/planning and uncertainty quantification algorithms with applications to aerospace systems. He received his B.S. in Mechanical Engineering in 2017 from the University of Utah and his Ph.D. in Electrical and Computer Engineering in 2024 from the University of New Mexico. He was a visiting graduate researcher at the Air Force Research Laboratory in 2018, as well as a summer intern at the Jet Propulsion Laboratory in 2015 (on Project Starshade) and in 2016 (on InSight). His research interests include stochastic optimal control/planning and uncertainty quantification, with applications to real-world systems.
Joshua Pilipovsky
RTX Technology Research Center (RTRC)
Biography
Joshua Pilipovsky is a Senior Research Engineer at the RTX Technology Research Center (RTRC). He received the B.S., M.S., and Ph.D. degrees in Aerospace Engineering from the Georgia Institute of Technology in 2019, 2021, and 2025, respectively. He held Guidance, Navigation, and Control (GN&C) and software engineering positions with Raytheon Technologies from 2021 to 2024. His current research interests lie broadly at the intersection of control theory, optimization, and learning, with topics including stochastic optimal control, distributionally robust control, data-driven control, and uncertainty quantification, with applications to aerial and space vehicle autonomy.
Alex Soderlund
Air Force Research Laboratory
Biography
Alexander Soderlund is a research aerospace engineer at the Space Vehicles Directorate at Kirtland Air Force Base. He graduated in 2020 from The Ohio State University with a Ph.D. in Aerospace Engineering under the advisement of Dr. Mrinal Kumar. He then worked as a National Research Council postdoctoral research fellow for AFRL developing autonomous rendezvous and docking algorithms until assuming a civilian researcher role in late 2022 within the space control branch. His work largely focuses on enabling local onboard autonomy for satellites in the hazardous space domain. Research areas include multi-agent coordination, autonomous decision-making, assurance of safety while maneuvering, and enabling shared tactical awareness for distributed system elements (e.g., ground systems, space force operators, and satellites).
Sean Phillips
Air Force Research Laboratory
Biography
Sean Phillips is a Senior Mechanical Engineer and Technology Advisor of the Space Control Branch at the Air Force Research Laboratory. He holds the title of Research Assistant Professor (LAT) at the University of New Mexico in Albuquerque, NM. He received his Ph.D. from the Department of Computer Engineering at the University of California, Santa Cruz, in 2018. He received his B.S. in Mechanical Engineering from the University of Arizona in 2011 and his M.S. in Mechanical Engineering, specializing in Dynamics and Controls, from the same university in 2013. In 2009, he joined the Hybrid Dynamics and Controls Lab, where he received a NASA Space Grant in 2010. In 2010, he also received an Undergraduate Research Grant from the University of Arizona Honors College. He held Space Scholars Internships at the Air Force Research Laboratory in Albuquerque, NM, during the summers of 2011, 2012, and 2017. In 2014, he joined the Hybrid Systems Laboratory at the University of California, Santa Cruz. In 2017, he received the Jack Baskin and Peggy Downes-Baskin Fellowship for his research on autonomous networked systems from the Baskin School of Engineering at the University of California, Santa Cruz.
Outreach
Our goal is to promote the dissemination of ideas between researchers from both academia and industry, representing institutions from different geographical regions including North America and Europe. Our intended outcomes are threefold. First, we wish to help practitioners adopt tools from academia by providing practical examples, especially in aerospace, that in turn motivate fundamental research. Second, we believe that when the tools of a field are accessible to researchers, the field as a whole is enriched. Last but not least, the lightning round sessions are specifically designed to provide opportunities for Ph.D. students and postdoctoral fellows to present their work. Doing so allows them to engage with established experts in the field and obtain crucial feedback from the speakers as well as the attendees at large. We believe that having new and early-career researchers present will nurture the next generation of researchers in stochastic planning and control.