Workshops and Tutorials
Time Zones Around the World
The conference takes place from 02:00 pm (14:00) to 05:00 pm (17:00) UTC. This is equivalent to:
- 07:00 am – 10:00 am Pacific Time (US West Coast)
- 10:00 am – 01:00 pm Eastern Time (US East Coast)
- 03:00 pm – 06:00 pm British Summer Time
- 04:00 pm – 07:00 pm Central European Summer Time
- 05:00 pm – 08:00 pm Eastern European Summer Time
- 10:00 pm – 01:00 am China Standard Time & Australian Western Standard Time
- 11:00 pm – 02:00 am Korea and Japan Standard Time
- 11:30 pm – 02:30 am Australian Central Standard Time
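These conversions can be checked programmatically with Python's standard `zoneinfo` module. The date below is illustrative only; daylight-saving offsets depend on the actual conference date:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Conference start: 14:00 UTC (the date here is illustrative)
start_utc = datetime(2020, 9, 14, 14, 0, tzinfo=timezone.utc)

for zone in ["America/Los_Angeles", "America/New_York", "Europe/London",
             "Europe/Berlin", "Asia/Shanghai", "Asia/Tokyo", "Australia/Adelaide"]:
    local = start_utc.astimezone(ZoneInfo(zone))
    print(f"{zone:20s} {local:%H:%M}")
```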
- Explainable Machine Learning: Overview of Methods and Application (Nina Schaaf)
- Machine Learning in Robotics (Kilian Kleeberger)
- RGB-D Odometry and SLAM (Javier Civera)
- Robust Kalman Filtering (Florian Pfaff and Benjamin Noack)
- An Introduction to Non-linear State Estimation with Discrete Filters (Felix Govaers)
- Fusion of Gyroscope and Accelerometer Data With Hands-On Part (Renaldas Urniezius)
- Multi-Sensor Fusion Meets Deep Learning: Challenges and Opportunities (Sen Wang and Yan Zhuang)
Explainable Machine Learning: Overview of Methods and Application (Nina Schaaf)
In recent years, artificial intelligence has become a key technology in many application areas, e.g., manufacturing, healthcare, and finance. Deep-learning approaches, trained on huge data sets, are now able to discover highly complex correlations and thus make very accurate decisions. Such algorithms are also known as “black boxes”, because it is practically impossible for humans to follow their complex decision-making processes. For some applications, however, not only accurate predictions but also trust in the algorithms is of enormous importance. Examples are safety-critical domains such as autonomous driving or the medical sector. In such application areas, it is important to accompany critical decisions with explanations.
The tutorial starts with a general overview of the research field “explainable AI”, covering motivation, definitions, algorithms, evaluation techniques, and open research questions. The second part gives an application-oriented, in-depth introduction to some relevant explainability techniques: a selection of techniques is examined and compared by means of sample applications.
The overall goal of the tutorial is to introduce the topic “explainable AI” and to provide examples of possible applications.
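To give a flavour of the model-agnostic techniques such a tutorial might cover, here is a minimal permutation-importance sketch. The toy data and the stand-in “model” are invented for illustration and are not material from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends only on the first feature.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def accuracy(X, y):
    """A stand-in 'model': predicts directly from feature 0."""
    return np.mean((X[:, 0] > 0).astype(int) == y)

baseline = accuracy(X, y)
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
    importances.append(baseline - accuracy(Xp, y))

print(importances)  # the drop in accuracy is largest for feature 0
```

Permuting a feature destroys its relationship with the target; the resulting drop in accuracy is a simple, model-agnostic measure of how much the model relies on that feature.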
Nina Schaaf works as a research associate at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart. In 2018, she received her M.Sc. degree from Stuttgart Media University. She first worked on the topic “explainable AI” in her master’s thesis and is now deepening this work in her doctoral thesis. She has already given talks and training sessions on the topic.
Machine Learning in Robotics (Kilian Kleeberger)
Learning goals:
- Get an overview of the state of the art in AI-based robotic solutions
- Understand the usefulness of simulations for robot learning tasks
- Understand concepts for bridging the “reality gap” in order to deploy models from simulation to the real world
- Understand possible application areas of machine learning in robotics
- Understand challenges in robot learning (intersection of machine learning and robotics)
Short summary of the material to be presented:
- Short introduction to machine learning
- Short introduction to robotics
- Overview of typically used sensor systems
- Robot vision: Making use of developments in computer vision for robotic applications
- Deep reinforcement learning for robotics – recent achievements
- Introduction to simulations and techniques for a robust sim-to-real transfer
- Simple code examples are provided during the tutorial
- Best practice examples from Fraunhofer IPA
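One of the sim-to-real techniques mentioned above, domain randomization, can be sketched as sampling randomized simulator parameters for each training episode. The parameter names and ranges below are purely illustrative, not the course material:

```python
import random

def randomized_sim_params(rng):
    """Sample one set of physics/rendering parameters for a training episode."""
    return {
        "friction":   rng.uniform(0.4, 1.2),   # vary contact dynamics
        "mass_scale": rng.uniform(0.8, 1.2),   # vary object masses
        "latency_ms": rng.uniform(0.0, 40.0),  # vary sensor/actuation delay
        "light":      rng.uniform(0.5, 1.5),   # vary rendering brightness
    }

rng = random.Random(42)
episodes = [randomized_sim_params(rng) for _ in range(3)]
for p in episodes:
    print(p)
```

A policy trained across such variations is less likely to overfit to the simulator's exact physics and thus transfers more robustly to the real robot.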
- Research associate in the Robot and Assistive Systems department at Fraunhofer IPA in Stuttgart, Germany
- University degree in international production engineering and management
- Research in machine learning / artificial intelligence and robotics (bin-picking)
- Contact information: email@example.com; +49 711 970-1191
- Previous experience:
- Organizer of the “Object Pose Estimation Challenge for Bin-Picking” at IROS 2019; further information: http://www.bin-picking.ai/en/competition.html
- Organizer and trainer of the machine learning expert training course “Cognitive Robotics” at Fraunhofer IPA; further information:
RGB-D Odometry and SLAM (Javier Civera)
The emergence of modern RGB-D sensors, combining photometric and depth information, has had a significant impact on many application fields, including robotics, augmented reality (AR) and 3D scanning. They are low-cost, low-power and small-size alternatives to traditional range sensors such as LiDAR. Moreover, unlike RGB cameras, RGB-D sensors provide additional depth information that removes the need for frame-by-frame triangulation for 3D scene reconstruction. These merits have made them very popular in mobile robotics and AR, where it is of great interest to estimate egomotion and 3D scene structure. Such spatial understanding can enable robots to navigate autonomously without collisions and allow users to insert virtual entities consistent with the image stream.

In this tutorial, we will review common formulations of odometry and Simultaneous Localization and Mapping (known by its acronym SLAM) using an RGB-D input stream. The two topics are closely related, as the former aims to track the incremental camera motion with respect to a local map of the scene, and the latter to jointly estimate the camera trajectory and the global map with consistency. In both cases, the standard approaches minimize a cost function using nonlinear optimization techniques.

The tutorial covers three main parts. In the first part, we introduce the basic concepts of odometry and SLAM, motivate the use of RGB-D sensors, and give mathematical preliminaries relevant to most odometry and SLAM algorithms. In the second part, we detail the three main components of SLAM systems: camera pose tracking, scene mapping and loop closing. For each component, we describe different approaches proposed in the literature. In the final part, we briefly discuss the expected performance and limitations of current algorithms and review advanced research topics with references to the state of the art.
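To illustrate the pose-tracking component: when 3D point correspondences are available, as the depth channel of an RGB-D sensor makes possible, the relative rigid transform has a closed-form least-squares solution. This sketch uses the classical Kabsch/Umeyama method on synthetic data and is not code from the tutorial:

```python
import numpy as np

def align_rigid(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ R @ P + t,
    given matched 3D points P, Q of shape (N, 3) (Kabsch/Umeyama)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(1)
P = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R_est, t_est = align_rigid(P, Q)
```

Real RGB-D odometry pipelines embed such an alignment inside an iterative scheme (e.g. ICP-style re-matching) or minimize photometric/geometric costs with nonlinear optimization, as discussed in the tutorial.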
Javier Civera was born in Barcelona, Spain, in 1980. He received his Industrial Engineering degree in 2004 and his Ph.D. degree in 2009, both from the University of Zaragoza in Spain. He is currently an Associate Professor at the University of Zaragoza, where he teaches courses in computer vision, machine learning and control engineering. He has participated in and led several EU-funded, national and technology-transfer projects related to vision and robotics, and has been funded for research visits to Imperial College London and ETH Zürich. He has coauthored more than 45 publications in top conferences and journals, receiving more than 4,200 citations (Google Scholar). He has served as Associate Editor for IEEE T-ASE, IEEE RA-L, IEEE ICRA and IEEE/RSJ IROS. His current research interests are in the use of multi-view geometry and machine learning to produce robust and real-time visual SLAM technologies for robotics, wearables and AR applications.
Robust Kalman Filtering (Florian Pfaff and Benjamin Noack)
The optimality of the Kalman filter depends not only on an accurate, linear model but also on perfectly known parameters of the prior and noise distributions. This requirement is not specific to the Kalman filter but is an inherent problem deeply rooted in Bayesian filtering and, in part, also in frequentist statistics. Attendees will learn how this problem can be overcome by hybrid approaches that combine stochastic and set-membership methods. The approach is thoroughly explained along with solutions to the new challenges that arise. Furthermore, using the example of event-based estimation, attendees will learn how these versatile approaches not only improve our modeling of the true uncertainty but also help to exploit the absence of information.
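For reference, the standard linear Kalman filter whose robustness the tutorial addresses can be sketched in a few lines. The 1-D random-walk example below is purely illustrative:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state mean, P: state covariance, z: measurement."""
    # Predict with motion model F and process noise Q
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement model H and measurement noise R
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D example: noisy measurements of a nearly constant value 5.0.
F = np.array([[1.0]]); Q = np.array([[1e-4]])
H = np.array([[1.0]]); R = np.array([[0.5]])
x, P = np.array([0.0]), np.array([[10.0]])
rng = np.random.default_rng(0)
for _ in range(200):
    z = np.array([5.0 + rng.normal(scale=0.5**0.5)])
    x, P = kalman_step(x, P, z, F, Q, H, R)
```

Note that `Q` and `R` are assumed perfectly known here; the tutorial's hybrid stochastic/set-membership methods target exactly the situation where such assumptions fail.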
Florian Pfaff is a postdoctoral researcher at the Intelligent Sensor-Actuator-Systems Laboratory at the Karlsruhe Institute of Technology. He obtained his diploma in 2013 and his Ph.D. in 2018, both with the highest distinction. His research interests include a variety of estimation problems such as filtering on nonlinear manifolds, multitarget tracking, and estimation in the presence of both stochastic and non-stochastic uncertainties.
An Introduction to Non-linear State Estimation with Discrete Filters (Felix Govaers)
The increasing trend towards connected sensors (“internet of things” and “ubiquitous computing”) creates a demand for powerful non-linear estimation methodologies. Conventionally, algorithmic solutions in the field of Bayesian data fusion and target tracking are based on either a Gaussian (mixture) or a particle representation of the prior and posterior probability density functions (pdfs). Discrete filters instead reduce the state space to a fixed grid and represent the pdf as an array of function values, which becomes infeasible in high to extraordinarily high dimensions. Due to this “curse of dimensionality”, data compression techniques such as tensor decompositions have to be applied.
In this tutorial, the basic methods for a Bayes formalism in discrete state spaces are explained. Possible solutions to the tensor decomposition (and composition) process are presented. Algorithms will be provided for each solution. The list of topics includes: Short introduction to target tracking and non-linear state estimation, discrete pdfs, Bayes recursion on those, PARAFAC/CANDECOMP Decomposition (CPD), Tucker and Hierarchical Tucker decomposition.
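Before any tensor decomposition becomes necessary, the basic discrete Bayes recursion can be illustrated on a low-dimensional grid. This one-dimensional sketch (grid, noise levels, and measurement are all illustrative) shows the predict and update steps:

```python
import numpy as np

def grid_predict(pdf, kernel):
    """Prediction step: convolve the gridded pdf with a process-noise kernel."""
    post = np.convolve(pdf, kernel, mode="same")
    return post / post.sum()

def grid_bayes_update(prior, likelihood):
    """Bayes update on a discrete grid: pointwise product, then normalize."""
    post = prior * likelihood
    return post / post.sum()

# 1-D grid over [0, 10]; uniform prior, one noisy measurement at 4.2.
grid = np.linspace(0.0, 10.0, 201)
pdf = np.full_like(grid, 1.0 / len(grid))
kernel = np.exp(-0.5 * (np.linspace(-1, 1, 21) / 0.3) ** 2)
kernel /= kernel.sum()                              # process-noise kernel

pdf = grid_predict(pdf, kernel)
lik = np.exp(-0.5 * ((grid - 4.2) / 0.5) ** 2)      # Gaussian measurement model
pdf = grid_bayes_update(pdf, lik)

estimate = grid[np.argmax(pdf)]                     # MAP estimate
```

With a grid of n points per dimension, a d-dimensional state needs an array of n^d values; this exponential growth is exactly why the tensor decompositions covered in the tutorial are needed.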
Felix Govaers received his Diploma in Mathematics and his Ph.D. in Computer Science, the latter with the title “Advanced data fusion in distributed sensor applications”, both from the University of Bonn, Germany. Since 2009 he has worked at Fraunhofer FKIE in the department for Sensor Data Fusion and Information Processing, where he led the research group “Distributed Systems” for three years. Since 2017 he has been the deputy head of the department “Sensordata and Information Fusion”. His research focuses on data fusion for state estimation in sensor networks and non-linear filtering. This includes tensor-decomposition-based filtering, track extraction, the processing of delayed measurements, as well as the distributed Kalman filter and track-to-track fusion.
Fusion of Gyroscope and Accelerometer Data With Hands-On Part (Renaldas Urniezius)
In this course we concentrate on a high-speed, covariance-free approach in which we use a generic grey-box model, based on quaternions, to infer the instantaneous free-fall vector orientation with respect to the vertical axis. The algorithm uses particles that are sampled directly from the most recent synchronous observations of both the accelerometer and the gyroscope.
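The course's quaternion-based particle approach is more sophisticated, but the core idea of fusing the two sensors can be illustrated with a one-axis complementary filter: integrate the gyroscope for short-term accuracy while pulling toward the accelerometer's tilt angle to cancel long-term drift. All values below are illustrative:

```python
def complementary_filter(gyro_rate, accel_angle, angle, dt, alpha=0.98):
    """Fuse a gyroscope rate (rad/s) and an accelerometer tilt angle (rad).
    alpha close to 1 trusts the gyro short-term; (1 - alpha) weights the
    accelerometer, which is noisy but drift-free."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Static case: the gyro reads only a constant bias, the accelerometer
# reads the true tilt. The estimate should settle near the true angle.
true_angle = 0.1          # rad
gyro_bias = 0.02          # rad/s of drift
angle = 0.0
for _ in range(2000):     # 20 s at 100 Hz
    angle = complementary_filter(gyro_bias, true_angle, angle, dt=0.01)
```

The estimate converges to true_angle plus a small residual offset proportional to the gyro bias, which shows why the choice of `alpha` trades responsiveness against drift rejection.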
Renaldas Urniezius received a Ph.D. in Electronics Engineering from Kaunas University of Technology (KTU). He is an Associate Professor in the Department of Automation at KTU and a Senior Member of the IEEE Robotics and Signal Processing societies. His research includes direct drives, robotics, and bio-engineering applications, and his university lectures deal with proactive optimal control to infer information faster. His scientific interests include the Pontryagin principle, sensor fusion and vision analysis applications, the foundations of inference and machine learning methods, variational programming, and grey-box variance-free approaches.
Multi-Sensor Fusion Meets Deep Learning: Challenges and Opportunities (Sen Wang and Yan Zhuang)
The workshop will be live from 14:00 to 17:00 UTC!
You can join the workshop through this link: https://kit-lecture.zoom.us/j/97603698483?pwd=TlROcGhzbnlDSER1VzFTaVVwT3ZJdz09
Workshop website: http://pro.hw.ac.uk/mfi20/
Multi-sensor fusion techniques have achieved great success and have been the underpinning of many applications and systems over the last decades. However, with the rapid development of sensor technologies and the recent data explosion in information systems, the multi-sensor fusion community faces big challenges in making full use of diverse sources of information, efficiently modelling high-dimensional data, reasoning about uncertainty from imperfect data, etc.
Recent advances in deep learning have revolutionized some fields, e.g., computer vision, and show the potential to benefit multi-sensor fusion, though it is not yet clear how deep learning can naturally model uncertainty and explicitly use prior knowledge, both of which are key for multi-sensor fusion. As sensor fusion and deep learning become two main components of many robotic systems, industrial automation, cyber-physical systems, the internet of things, and so on, it is inevitable that they will interact ever more closely. However, as this is an emerging area, extensive novel research and practical applications are needed to explore the optimal integration of the two techniques and to tackle these challenges.
The main objective of the workshop is to bring together researchers interested in these two areas to address key challenges in both domains: identifying limitations, envisioning possible approaches to solve them, and exploring opportunities with the potential to advance both techniques. The workshop’s call for papers will include, but is not limited to, the following topics of interest:
- Novel deep learning structures for multi-sensor fusion;
- Multi-sensor fusion for robust deep learning inference;
- Multi-source based learning with domain-specific prior knowledge;
- Unsupervised learning or self-supervised learning for sensor fusion;
- Multi-sensor fusion for uncertainty estimation of deep learning;
- Fusion and learning integration to handle imperfect data;
- Better fusion efficiency and/or robustness with deep features;
- Applications to robotics, autonomous vehicles, cyber-physical systems, etc.
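As a classical baseline relevant to the uncertainty-estimation topics above: independent, unbiased estimates with known variances (whether from physical sensors or from a learned model's uncertainty output) fuse optimally by inverse-variance weighting. The numbers below are illustrative:

```python
import numpy as np

def fuse(estimates, variances):
    """Fuse independent unbiased estimates by inverse-variance weighting,
    the minimum-variance linear combination. Returns the fused estimate
    and its variance, which is lower than any input variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Two sensors measuring the same quantity with different noise levels:
fused, var = fuse([10.2, 9.8], [0.04, 0.16])
print(fused, var)  # the fused estimate is closer to the more certain sensor
```

Deep models only enable this kind of weighting if their uncertainty estimates are well calibrated, which is one of the open challenges the workshop targets.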
Dr. Sen Wang is an Assistant Professor at the Edinburgh Centre for Robotics and Heriot-Watt University. His research focuses on robot perception using probabilistic and learning approaches. He was the Demo and Poster Chair of the 2019 IEEE Smart World Congress, an Organizing Committee member of the 15th and 16th IEEE International Conference on Control and Automation, and a co-organizer of the International Workshop on Internet of People, Assistive Robots and Things at ACM MobiSys’18. He is also an Associate Editor of ICRA 2019, 2020 and IROS 2020.
Professor Yan Zhuang is a Professor in the School of Control Science and Engineering at Dalian University of Technology, China, where he leads the Intelligent Robotics Lab. He has been an organizing committee member of many international conferences, such as the IEEE International Conference on Robotics and Biomimetics (ROBIO), the IEEE International Conference on CYBER Technology in Automation, Control and Intelligent Systems, and the IEEE World Congress on Intelligent Control and Automation (WCICA). He is also an Associate Editor of IEEE Transactions on Instrumentation and Measurement, IET Cyber-Systems and Robotics, etc.