Plenary Talks

There are three plenary talks:

  • Luca Carlone
  • Randal Beard
  • Renato Zanetti

See below for more details. For the schedule, see the conference program.

Plenary Talk 1: Certifiable Estimation for Robots and Autonomous Vehicles: From Robust Algorithms to Robust Systems

Luca Carlone

Luca Carlone is the Leonardo Career Development Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the MIT Laboratory for Information & Decision Systems (LIDS).

He is the director of the MIT SPARK Lab, where he works at the cutting edge of robotics and autonomous systems research. His goal is to enable human-level perception and world understanding for mobile robotics platforms (drones, self-driving vehicles, ground robots) operating in the real world. Towards this goal, his work involves a combination of rigorous theory and practical implementations. In particular, his research interests include nonlinear estimation and probabilistic inference, numerical and distributed optimization, and geometric computer vision applied to sensing, perception, and decision-making in single and multi-robot systems.

Human-level perception will increase reliability in safety-critical applications of robotics and autonomous vehicles (including self-driving cars and robots for disaster response), and increase efficiency and effectiveness in service robotics and consumer applications (manufacturing, healthcare, domestic robotics, augmented reality).


Spatial perception, the robot’s ability to sense and understand the surrounding environment, is a key enabler for autonomous systems operating in complex environments, including self-driving cars and unmanned aerial vehicles. Recent advances in perception algorithms and systems have enabled robots to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, researchers and practitioners are well aware of the brittleness of existing perception systems, and a large gap still separates robot and human perception. This talk discusses two efforts targeted at bridging this gap.

The first effort focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation: our algorithms are “hard to break” (e.g., they are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance. I discuss the foundations of certifiable perception and motivate how it can lead to safer systems.

The second effort targets high-level understanding. While humans are able to quickly grasp the geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, which extend the traditional notions of mapping and SLAM and allow a robot to build a “mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction.

Certifiable algorithms and real-time high-level understanding are key enablers for the next generation of autonomous systems: systems that are trustworthy, understand and execute high-level human instructions, and operate in large dynamic environments over an extended period of time.

Plenary Talk 2: Vision-Based Tracking with Small UAVs

Randal Beard

Randal W. Beard received the B.S. degree in electrical engineering from the University of Utah, Salt Lake City in 1991, the M.S. degree in electrical engineering in 1993, the M.S. degree in mathematics in 1994, and the Ph.D. degree in electrical engineering in 1995, all from Rensselaer Polytechnic Institute, Troy, NY. Since 1996, he has been with the Electrical and Computer Engineering Department at Brigham Young University, Provo, UT, where he is currently a professor. In 1997 and 1998, he was a Summer Faculty Fellow at the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA. In 2006 and 2007 he was a visiting research fellow at the Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL. His primary research focus is autonomous control of small air vehicles and multivehicle coordination and control. He is a past associate editor for the IEEE Transactions on Automatic Control, the IEEE Control Systems Magazine, and the Journal of Intelligent and Robotic Systems. He is a fellow of the IEEE, and an associate fellow of AIAA.


This talk will describe our current work on vision-based autonomous target tracking and following using small UAVs. We will present a new multiple-target tracking algorithm based on the random sample consensus (RANSAC) algorithm that is widely used in computer vision. A recursive version of the RANSAC algorithm (R-RANSAC) will be discussed, and its extension to tracking multiple dynamic objects will be explained. The performance of R-RANSAC will be compared to state-of-the-art target tracking algorithms in the context of problems that are relevant to UAV applications. We will also discuss recent research on vision-based relative pose estimation. We will describe a technique for using point correspondences in video to estimate the camera pose, where the cost function to be optimized is derived from the epipolar constraint. At each iteration, the estimated incremental pose is used to construct the Essential matrix, and the Levenberg-Marquardt (LM) algorithm is used to optimize the associated Sampson error.
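The epipolar-constraint formulation mentioned above can be illustrated with a minimal sketch. This is a generic illustration, not the speakers' implementation: the synthetic two-view data, the rotation-vector-plus-translation parameterization, and the use of SciPy's Levenberg-Marquardt solver are my own assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def essential_matrix(params):
    """Build E = [t]_x R from a 6-vector (rotation vector, translation)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    t_cross = np.array([[0.0, -t[2], t[1]],
                        [t[2], 0.0, -t[0]],
                        [-t[1], t[0], 0.0]])
    return t_cross @ R

def sampson_residuals(params, x1, x2):
    """First-order (Sampson) residuals of the epipolar constraint x2' E x1 = 0.

    x1, x2: (N, 3) homogeneous points in normalized image coordinates.
    """
    E = essential_matrix(params)
    Ex1 = x1 @ E.T          # rows are E @ x1_i
    Etx2 = x2 @ E           # rows are E.T @ x2_i
    num = np.sum(x2 * Ex1, axis=1)
    denom = Ex1[:, 0]**2 + Ex1[:, 1]**2 + Etx2[:, 0]**2 + Etx2[:, 1]**2
    return num / np.sqrt(denom)

# Hypothetical synthetic two-view problem, for illustration only.
rng = np.random.default_rng(0)
true_params = np.array([0.0, 0.1, 0.0, 1.0, 0.2, 0.0])  # small rotation + translation
P = np.column_stack([rng.uniform(-1, 1, 30),
                     rng.uniform(-1, 1, 30),
                     rng.uniform(4, 6, 30)])            # 3D points in front of the cameras
x1 = P / P[:, 2:3]                                      # view 1: camera at the origin
R_true = Rotation.from_rotvec(true_params[:3]).as_matrix()
P2 = P @ R_true.T + true_params[3:]
x2 = P2 / P2[:, 2:3]                                    # view 2: after the true motion

# Levenberg-Marquardt refinement of a perturbed initial pose guess.
x0 = true_params + 0.02 * rng.standard_normal(6)
sol = least_squares(sampson_residuals, x0, args=(x1, x2), method='lm')
```

The Sampson error is a first-order approximation to the geometric reprojection error, which is why it is a common choice for this kind of pose refinement.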

Plenary Talk 3: Statistical Estimation: Model-Based, Data-Driven, and Hybrid Approaches

Renato Zanetti

Renato joined the Department of Aerospace Engineering and Engineering Mechanics in January 2017. Prior to joining UT, Renato worked for almost a decade in the private and government sectors. From 2007 to 2013 Renato was an engineer at the Charles Stark Draper Laboratory. During this time he worked on every current and planned manned NASA vehicle: the International Space Station (ISS), the Space Shuttle, and Orion. He has been heavily involved in designing Orion navigation since the start of his professional career. Renato was the lead relative navigation designer for Orbital Sciences' Cygnus vehicle, which completed several successful cargo resupply missions to the ISS. In 2013 he served as the lead of the Vehicle Dynamics and Control group at Draper. In this role he provided overall management and direction to ten engineers and several student interns/fellows.

From 2013 to 2017 Renato was an engineer at the NASA Johnson Space Center (JSC). During this time he served as one of the lead designers of the absolute navigation filter for Orion EFT-1, which successfully flew in December 2014. He was responsible for the design, coding, and testing of two navigation Computer Software Units (CSUs). During the EFT-1 flight, he monitored the navigation telemetry from the engineering support room in Denver (Raptor). Prior to departing from NASA, Renato delivered the design and code of three CSUs for Orion's next flight: Exploration Mission 1 (later renamed Artemis 1).

Renato is a Fellow of the American Astronautical Society (AAS), an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA), and a member of the Institute of Electrical and Electronics Engineers and of the International Society of Information Fusion. He is the chair of the AIAA Astrodynamics Technical Committee and a former chair of the AAS Space-Flight Mechanics Technical Committee, which organizes two astrodynamics technical conferences every year. Renato and the Orion GN&C group received the prestigious NASA Software of the Year award in 2015. He is also the recipient of a NASA Technical Excellence Award for outstanding achievement in Orion navigation design, two NASA On the Spot awards, and several Team and Group Achievement Awards.


An estimator is a function that, given a measurement as an input, returns as an output an estimate of the state of the system. In linear estimators, such as the Kalman filter (KF), the measurement appears linearly, i.e., it is scaled by a deterministic gain. The Kalman filter is a model-based estimator; its functional form is determined by a physical model of the measurement and the dynamics. The advantage of a model-based design is the reliance on known physical principles to predict outcomes in corners of the state space that might seem very unlikely to occur a priori. The disadvantage of model-based estimation is that any model mismatch will bias our estimate towards “what we think” should be happening rather than “what actually” happens.

An alternative approach to model-based estimation and prediction is offered by data-driven methodologies, where the functional form of the estimator is determined from data rather than models. The most classic data-driven methodology is regression using polynomial functions or splines. In regression, for example linear regression, the relation between a set of measurements and their associated “true” states is assumed linear, and the coefficients of linearity (slope and intercept) are determined to minimize the square of the error. Once the slope and intercept are calculated, we can deploy the regressor as an estimator: feed it a measurement whose associated true state we don’t know, and extract an estimate of it. Statistical tools that aid regression include cross-validation, the bootstrap, and shrinkage (regularization) methods.
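The regression-as-estimator idea described above can be sketched in a few lines. The synthetic data, the assumed relation, and the variable names below are my own illustrative choices, not material from the talk: fit slope and intercept by least squares, then use the fitted line to estimate the state associated with a new measurement.

```python
import numpy as np

# Hypothetical training data: measurements z and their associated "true" states x.
# Assume the underlying relation is x = 2*z + 1, observed with a little noise.
rng = np.random.default_rng(1)
z_train = rng.uniform(0.0, 10.0, 200)
x_train = 2.0 * z_train + 1.0 + 0.01 * rng.standard_normal(200)

# Least-squares fit: choose (slope, intercept) to minimize sum (x - (a*z + b))^2.
A = np.column_stack([z_train, np.ones_like(z_train)])
(slope, intercept), *_ = np.linalg.lstsq(A, x_train, rcond=None)

def estimate_state(z_new):
    """Deploy the trained regressor as an estimator for a new measurement."""
    return slope * z_new + intercept
```

Once fitted, `estimate_state` plays exactly the role of the estimator in the text: it maps a measurement whose true state is unknown to an estimate of that state.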

In the context of nonlinear regression, the regression coefficients cannot always be calculated in closed form, and numerical iterative methods of least-squares minimization are employed instead. These methods include gradient descent, Gauss-Newton, and Levenberg-Marquardt. From the point of view of statistical learning, supervised machine learning (ML) is a type of regression in which the functional form of the estimator is a cascading combination of nonlinear activation functions. Once we have established the functional form of the estimator, we use data and a nonlinear optimizer to calculate the optimal parameters of the estimator. ML techniques often use gradient descent or stochastic gradient descent (in which only a random subset of the data is used to calculate the gradient) as an optimizer, and rely strongly on statistical tools such as cross-validation and shrinkage to avoid over-fitting the training data. The advantage of data-driven methodologies such as supervised machine learning is that they do not bias the solution towards an assumed model of the truth; the disadvantage is that only data close to that seen during training is likely to produce meaningful estimates, while unseen cases have the potential to fail in spectacular fashion. This typically implies the need for a large and rich training set and a comprehensive training session.
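The iterative least-squares minimization mentioned above can be sketched with a Gauss-Newton loop for a simple nonlinear model. The exponential model, the noiseless synthetic data, and the starting guess are my own illustrative assumptions.

```python
import numpy as np

def gauss_newton(x, y, theta, iters=50):
    """Fit the model y ≈ a * exp(b * x) by Gauss-Newton; theta = [a, b]."""
    for _ in range(iters):
        a, b = theta
        f = a * np.exp(b * x)                       # model prediction
        r = y - f                                   # residuals
        # Jacobian of the model with respect to [a, b]
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        # Solve the linearized least-squares problem J @ delta ≈ r
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta = theta + delta
    return theta

# Noiseless synthetic data generated from a = 2, b = 0.5
x = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(0.5 * x)
a_hat, b_hat = gauss_newton(x, y, np.array([1.5, 0.3]))
```

Each iteration linearizes the model around the current parameter estimate and solves a linear least-squares subproblem; Levenberg-Marquardt adds a damping term to this step to improve robustness far from the solution.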

This talk will discuss model-based and data-driven approaches and briefly touch on current research to merge the two.