Program

Program Overview

The MFI goes virtual in 2020. Due to the uncertain situation surrounding the coronavirus and the restrictions on travel and meetings, the conference will be fully virtual. The program consists of an asynchronous part (video sessions) and a synchronous part (live Q&A sessions). Participants can watch videos of the talks that match their interests at whatever time suits them best. At the conference times (see below), participants join the live sessions to listen to exciting plenary talks and to discuss the papers of interest to them with the authors in the Q&A sessions.
Program at a Glance
Time Zones Around the World

The conference takes place from 02:00 pm (14:00) to 05:00 pm (17:00) UTC. This is equivalent to:
07:00 am – 10:00 am Pacific Time (US West Coast)
10:00 am – 01:00 pm Eastern Time (US East Coast)
03:00 pm – 06:00 pm British Summer Time
04:00 pm – 07:00 pm Central European Summer Time
05:00 pm – 08:00 pm Eastern European Summer Time
10:00 pm – 01:00 am China Standard Time & Australian Western Time
11:00 pm – 02:00 am Korean and Japanese Time
11:30 pm – 02:30 am Australian Central Standard Time

Video Sessions

The paper sessions will be provided as pre-recorded videos. Guidelines for the preparation of the videos can be found under the following link:

Video Guidelines

Live Sessions

On each conference day, we will organize live sessions where attendees can discuss questions and comments about the papers and tutorials. Preliminary guidelines are provided under the following link:

Q&A Guidelines

Monday, September 14

Tutorial and workshop sessions provide participants with detailed overviews as well as in-depth insights into selected topics in the field of multisensor fusion and integration. They take place in three-hour slots on Monday.
Tutorials present the state of the art on a frontier topic, enabling attendees to fully appreciate the current issues, the main schools of thought, and possible application areas. They can include hands-on laboratory sessions that give attendees instruction in the foundations of specific areas.
Workshops include a lead presentation introducing the audience to a specific subject and comprise up to five presentations that elaborate on diverse aspects of the session’s subject in detail.
The workshop and tutorial program can be found here.

Tuesday, September 15

The second conference day starts with a plenary talk and a Q&A session. The day closes with the awards ceremony to honor the authors of the best papers and the best reviewers.

Wednesday, September 16

The third conference day continues with a plenary talk, followed by regular and special sessions.

Topics

  • Real-time critical perception tasks in the context of automated driving
  • Directional estimation
  • Distributed sensor fusion in complex scenarios
  • Fusion of heterogeneous sensor information
  • Sensor systems
  • Autonomous systems design
  • Artificial intelligence for autonomous systems
  • Risk analysis and verification
  • Advances in nonlinear estimation
  • Localization (indoor, outdoor)
  • Distributed sensing systems
  • Deep learning & data fusion
  • Human-robot interaction

Sessions

Special Session 1 — Real-Time Critical Perception Tasks in the Context of Automated Driving

Chair: Richter, Sven (Karlsruhe Institute of Technology) | Co-Chair: Lauer, Martin (Karlsruhe Institute of Technology) | Q&A Session: Wednesday, 14:45 – 15:40 UTC

Zoom link: Only for registered participants

1 - Continuous Fusion of IMU and Pose Data Using Uniform B-Spline

Authors
    Hu, Haohao (Karlsruhe Institute of Technology)
    Beck, Johannes (Karlsruhe Institute of Technology)
    Lauer, Martin (Karlsruhe Institute of Technology)
    Stiller, Christoph (Karlsruhe Institute of Technology)
Abstract

In this work, we present a uniform B-spline based continuous fusion approach that accurately, efficiently, and continuously fuses motion data from an inertial measurement unit with pose data from a visual localization system. Currently, in the domain of robotics and autonomous driving, most ego-motion fusion approaches are filter based or pose-graph based. Filter-based approaches such as the Kalman filter or the particle filter usually require many parameters to be set carefully, which is a big overhead. Besides that, filter-based approaches can only fuse data in a time-forward direction, which is a big disadvantage when processing asynchronous data. Since pose-graph based approaches only fuse pose data, the inertial measurement unit data must first be integrated to estimate the corresponding pose data, which can bring accumulated error into the fusion system. Additionally, both filter-based and pose-graph based approaches only provide discrete fusion results, which may decrease the accuracy of subsequent data processing steps. Since the fusion approach is generally needed for robots and automated driving vehicles, it is a major goal to make it more accurate, robust, efficient, and continuous. Therefore, in this work, we address this problem and apply the axis-angle rotation representation, Rodrigues' formula, and a uniform B-spline implementation to solve the ego-motion fusion problem continuously. Evaluation results on real-world data show that our approach provides accurate, robust, and continuous fusion results.
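
As background for readers less familiar with B-splines, the display below recalls the standard uniform cubic B-spline interpolation of a translation from four consecutive control points; this is generic textbook notation, not the authors' full on-manifold formulation, which additionally handles rotations via the axis-angle representation and Rodrigues' formula.

    p(u) = \frac{1}{6}\Big[(1-u)^3\,p_{i-1} + (3u^3 - 6u^2 + 4)\,p_i + (-3u^3 + 3u^2 + 3u + 1)\,p_{i+1} + u^3\,p_{i+2}\Big], \quad u \in [0,1)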

Presentation

Only for registered participants

Material

Only for registered participants

2 - Semantic Evidential Grid Mapping Based on Stereo Vision

Authors
    Richter, Sven (Karlsruhe Institute of Technology)
    Beck, Johannes (Karlsruhe Institute of Technology)
    Wirges, Sascha (Karlsruhe Institute of Technology (KIT))
    Stiller, Christoph (Karlsruhe Institute of Technology)
Abstract

Accurately estimating the current state of local traffic scenes is a crucial component of automated vehicles. The desired representation may include static and dynamic traffic participants, details on free space and drivability, but also information on the semantics. Multi-layer grid maps allow all of this information to be included in a common representation. In this work, we present an improved method to estimate a semantic evidential multi-layer grid map using depth from stereo vision paired with pixel-wise semantically annotated images. The error characteristics of the depth from stereo are explicitly modeled when transferring pixel labels from the image to the grid map space. We achieve accurate and dense mapping results by incorporating a disparity-based ground surface estimation in the inverse perspective mapping. The proposed method is validated on our experimental vehicle in challenging urban traffic scenarios.

Presentation

Only for registered participants

Material

Only for registered participants

3 - SemanticVoxels: Sequential Fusion for 3D Pedestrian Detection Using LiDAR Point Cloud and Semantic Segmentation

Authors
    Fei, Juncong (Karlsruhe Institute of Technology (KIT), Opel Automobile GmbH)
    Chen, Wenbo (University of Stuttgart)
    Heidenreich, Philipp (Opel Automobile GmbH)
    Wirges, Sascha (Karlsruhe Institute of Technology (KIT))
    Stiller, Christoph (Karlsruhe Institute of Technology)
Abstract

3D pedestrian detection is a challenging task in automated driving because pedestrians are relatively small, frequently occluded and easily confused with narrow vertical objects. LiDAR and camera are two commonly used sensor modalities for this task, which should provide complementary information. Unexpectedly, LiDAR-only detection methods tend to outperform multisensor fusion methods in public benchmarks. Recently, PointPainting has been presented to eliminate this performance drop by effectively fusing the output of a semantic segmentation network instead of the raw image information. In this paper, we propose a generalization of PointPainting to be able to apply fusion at different levels. After the semantic augmentation of the point cloud, we encode raw point data in pillars to get geometric features and semantic point data in voxels to get semantic features and fuse them in an effective way. Experimental results on the KITTI test set show that SemanticVoxels achieves state-of-the-art performance in both 3D and bird’s eye view pedestrian detection benchmarks. In particular, our approach demonstrates its strength in detecting challenging pedestrian cases and outperforms current state-of-the-art approaches.

Presentation

Only for registered participants

Material

Only for registered participants

4 - Asymmetric Noise Tailoring for Vehicle Lidar Data in Extended Object Tracking

Authors
    Kaulbersch, Hauke (Georg-August-Universität Göttingen)
    Honer, Jens (Valeo Schalter und Sensoren GmbH)
    Baum, Marcus (University of Göttingen)
Abstract

Extended target models often approximate complex structures of real-world objects. Yet, these structures can have a significant impact on the interpretation of the measurements. A prime example of such a scenario is a dimensional reduction, i.e., a target that generates three-dimensional measurements is estimated by a two-dimensional model. We present an approach that introduces asymmetric surface noise to the random hypersurface model (RHM). This allows for a different interpretation of measurement generation depending on the measurement's location relative to the target surface, and in turn provides a way to model extended targets that generate measurements primarily, but not exclusively, at the surface. The benefits of this model are demonstrated on automotive LIDAR data, and a large-scale comparison to an approach from the literature is provided on the nuScenes data set.

Presentation

Only for registered participants

Material

Only for registered participants

5 - LMB Filter Based Tracking Allowing for Multiple Hypotheses in Object Reference Point Association

Authors
    Herrmann, Martin (Ulm University)
    Piroli, Aldi (Universität Ulm)
    Strohbeck, Jan (Ulm University)
    Müller, Johannes (Ulm University)
    Buchholz, Michael (University of Ulm)
Abstract

Autonomous vehicles need precise knowledge of the dynamic objects in their surroundings. Especially in urban areas with many objects and possible occlusions, an infrastructure system based on a multi-sensor setup can provide the required environment model for the vehicles. Previously, we published a concept of object reference points (e.g., the corners of an object), which allows for generic sensor "plug and play" interfaces and relatively cheap sensors. This paper describes a novel method to additionally incorporate multiple hypotheses when fusing the measurements of the object reference points, using an extension to the previously presented Labeled Multi-Bernoulli (LMB) filter. In contrast to the previous work, this approach improves the tracking quality in cases where the correct association of measurement and object reference point is unknown. Furthermore, this paper identifies options based on physical models to sort out inconsistent and infeasible associations at an early stage in order to keep the method computationally tractable for real-time applications. The method is evaluated on simulations as well as on real scenarios. Compared to similar methods, the proposed approach shows a considerable performance increase; in particular, the number of non-continuous tracks is decreased significantly.

Presentation

Only for registered participants

Material

Only for registered participants

6 - Pushing ROS towards the Dark Side: A ROS-Based Co-Simulation Architecture for Mixed-Reality Test Systems for Autonomous Vehicles

Authors
    Zofka, Marc René (FZI Research Center for Information Technology)
    Töttel, Lars (FZI Research Center for Information Technology)
    Zipfl, Maximilian (FZI Research Center for Information Technology)
    Heinrich, Marc (FZI Research Center for Information Technology)
    Fleck, Tobias (FZI Research Center for Information Technology)
    Schulz, Patrick (FZI Forschungszentrum Informatik)
    Zöllner, Johann Marius (FZI Forschungszentrum Informatik)
Abstract

Validation and verification of autonomous vehicles is still an unsolved problem. Although virtual approaches promise a cost-efficient and reproducible solution, a comprehensive and realistic representation of the real-world traffic domain is required in order to make valuable statements about the performance of a highly automated driving (HAD) function. Models from different domain experts offer a repository of such representations. However, these models must be linked together for an extensive and uniform mapping of the real-world traffic domain for HAD performance assessment. Here, we propose the concept of a co-simulation architecture built upon the Robot Operating System (ROS) for both the coupling and the integration of different domain expert models, and for the immersion and stimulation of real pedestrians as well as AD systems within a common test system. This enables a unified way of generating ground truth for the performance assessment of multi-sensorial AD systems. We demonstrate the applicability of the ROS-powered co-simulation by coupling behavior models in our mixed-reality environment.

Presentation

Only for registered participants

Material

Only for registered participants

Special Session 2 — Directional Estimation

Chair: Pfaff, Florian (Karlsruhe Institute of Technology) | Co-Chair: Li, Kailai (Karlsruhe Institute of Technology) | Q&A Session: Tuesday, 14:45 – 15:40 UTC

Zoom link: Only for registered participants

1 - Array-Based Emitter Localization Using a VTOL UAV Carried Sensor

Authors
    Steffes, Christian (Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE)
    Allmann, Clemens (Fraunhofer FKIE)
    Oispuu, Marc (Fraunhofer FKIE)
Abstract

In this paper, the localization of a radio frequency (RF) emitter using bearing estimates is investigated. We study the position estimation using a single airborne observer platform moving along a preplanned trajectory. We present results from field trials using an emitter location system (ELS) installed on a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV). Raw array data batches have been gathered using a six-channel receiver and a fully polarized array antenna. A standard two-step localization approach based on angle of arrival (AOA) measurements and a Direct Position Determination (DPD) approach have been applied. In real-flight experiments, the performance of both methods has been investigated.

Presentation

Only for registered participants

Material

Only for registered participants

2 - A Unified Approach to the Orbital Tracking Problem

Authors
    Kent, John Telford (University of Leeds)
    Bhattacharjee, Shambo (University of Leeds)
    Faber, Weston (L3Harris Space Works)
    Hussein, Islam I (Thornton Tomasetti)
Abstract

Consider an object in orbit about the Earth for which a sequence of angles-only measurements is made. This paper looks in detail at a one-step update for the filtering problem. Although the problem appears very nonlinear at first sight, it can be almost reduced to the standard linear Kalman filter by a careful formulation. The key features of this formulation are (1) the use of a local or adapted basis rather than a fixed basis for three-dimensional Euclidean space and the use of structural rather than ambient coordinates to represent the state, (2) the development of a novel “normal:conditional-normal” distribution to describe the propagated position of the state, and (3) the development of a novel “Observation-Centered” Kalman filter to update the state distribution. A major advantage of this unified approach is that it gives a closed-form filter which is highly accurate under a wide range of conditions, including high initial uncertainty, high eccentricity and long propagation times.
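
For reference, the standard linear Kalman filter update to which the formulation is (almost) reduced has the familiar closed form, written here in generic notation with state estimate x, covariance P, measurement z, measurement matrix H, and measurement noise covariance R:

    K = P H^\top (H P H^\top + R)^{-1}, \qquad x^{+} = x + K\,(z - Hx), \qquad P^{+} = (I - KH)\,P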

Presentation

Only for registered participants

Material

Only for registered participants

3 - The Interacting Multiple Model Filter on Boxplus-Manifolds

Authors
    Koller, Tom Lucas (University of Bremen)
    Frese, Udo (Universität Bremen)
Abstract

The interacting multiple model filter is the standard in state estimation when different dynamic models are required to model the behaviour of a system. It performs a probabilistic mixing of estimates. Up to now, it has been undefined how to perform this mixing properly on manifold spaces, e.g., for quaternions. We present the proper probabilistic mixing on differentiable manifolds based on the boxplus method. The result is the interacting multiple model filter on boxplus-manifolds. We prove that our approach is a first-order correct approximation of the optimum. The approach is evaluated in a simulation and performs as well as the ad-hoc solution for quaternions. A generic implementation of the boxplus interacting multiple model filter for differentiable manifolds is published alongside this paper.
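
To make the mixing step concrete: in a standard IMM, the mode-conditioned estimates x_i are combined by a weighted mean with mixing weights w_i. On a boxplus-manifold, a natural replacement for the plain weighted mean is an iterated weighted mean computed with the ⊞ and ⊟ operators. The sketch below is illustrative only and not necessarily the exact scheme derived in the paper:

    \bar{x} \leftarrow \bar{x} \boxplus \sum_i w_i \,\big(x_i \boxminus \bar{x}\big),

iterated until the correction term becomes negligible, with the mixed covariance then formed from the residuals x_i ⊟ x̄.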

Presentation

Only for registered participants

Material

Only for registered participants

4 - Sparse Magnetometer-Free Real-Time Inertial Hand Motion Tracking

Authors
    Grapentin, Aaron (TU Berlin)
    Lehmann, Dustin (TU Berlin)
    Zhupa, Ardjola (TU Berlin)
    Seel, Thomas (TU Berlin)
Abstract

Hand motion tracking is a key technology in several applications including ergonomic workplace assessment, human-machine interaction and neurological rehabilitation. Recent technological solutions are based on inertial measurement units (IMUs). They are less obtrusive than exoskeleton-based solutions and overcome the line-of-sight restrictions of optical systems. The number of sensors is crucial for usability, unobtrusiveness, and hardware cost. In this paper, we present a real-time capable, sparse motion tracking solution for hand motion tracking that requires only five IMUs, one on each of the distal finger segments and one on the back of the hand, in contrast to a recently proposed full-setup solution with 16 IMUs. The method only uses gyroscope and accelerometer readings and avoids magnetometer readings, which enables unrestricted use in indoor environments, near ferromagnetic materials and electronic devices. We use a moving horizon estimation (MHE) approach that exploits kinematic constraints to track motions and performs long-term stable heading estimation. The proposed method is validated experimentally using a recently developed sensor system. It is found that the proposed method yields qualitatively good agreement between the estimated and the actual hand motion and that the estimates are long-term stable. The root-mean-square deviation between the fingertip position estimates of the sparse and the full setup is found to be in the range of 1 cm. The method is hence highly suitable for unobtrusive and non-restrictive motion tracking in a range of applications.

Presentation

Only for registered participants

Material

Only for registered participants

5 - Estimating Correlated Angles Using the Hypertoroidal Grid Filter

Authors
    Pfaff, Florian (Karlsruhe Institute of Technology (KIT))
    Li, Kailai (Karlsruhe Institute of Technology (KIT))
    Hanebeck, Uwe D. (Karlsruhe Institute of Technology (KIT))
Abstract

Estimation for multiple correlated quantities generally requires considering a domain whose dimension is equal to the sum of the dimensions of the individual quantities. For multiple correlated angular quantities, considering a hypertoroidal manifold may be required. Based on a Cartesian product of d equidistant one-dimensional grids for the unit circle, a grid for the d-dimensional hypertorus can be constructed. This grid is used for a novel filter. For n grid points, the update step is in O(n) for arbitrary likelihoods and the prediction step is in O(n^2) for arbitrary transition densities. The run time of the latter can be reduced to O(n log n) for identity models with additive noise. In an evaluation scenario, the novel filter shows faster convergence than a particle filter for hypertoroidal domains and is on par with the recently proposed Fourier filters.
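
To see why the prediction step can drop to O(n log n) for identity models with additive noise, the sketch below performs the prediction of a one-dimensional circular grid filter as a circular convolution computed via the FFT. It illustrates that single idea only; grid construction, the weighting of grid values, and the d-dimensional hypertoroidal generalization follow the paper, not this sketch.

    import numpy as np

    def predict_circular_grid(prior_grid, noise_grid):
        """Prediction for an identity model with additive wrapped noise.
        Both inputs are probability masses on n equidistant grid points of the
        circle; the prediction is their circular convolution, O(n log n) via FFT."""
        conv = np.fft.ifft(np.fft.fft(prior_grid) * np.fft.fft(noise_grid)).real
        conv = np.clip(conv, 0.0, None)   # guard against tiny negative values
        return conv / conv.sum()

    def update_circular_grid(prior_grid, likelihood_grid):
        """Update step: pointwise multiplication with the likelihood, O(n)."""
        posterior = prior_grid * likelihood_grid
        return posterior / posterior.sum()

    # toy example: prior concentrated near 1 rad, noise concentrated near 0 rad
    n = 200
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    prior = np.exp(5.0 * np.cos(theta - 1.0)); prior /= prior.sum()
    noise = np.exp(20.0 * np.cos(theta));      noise /= noise.sum()
    predicted = predict_circular_grid(prior, noise)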

Presentation

Only for registered participants

Material

Only for registered participants

6 - Nonlinear Von Mises-Fisher Filtering Based on Isotropic Deterministic Sampling

Authors
    Li, Kailai (Karlsruhe Institute of Technology (KIT))
    Pfaff, Florian (Karlsruhe Institute of Technology (KIT))
    Hanebeck, Uwe D. (Karlsruhe Institute of Technology (KIT))
Abstract

We present a novel deterministic sampling approach for von Mises-Fisher distributions of arbitrary dimensions. Following the idea of the unscented transform, samples of configurable size are drawn isotropically on the hypersphere while preserving the mean resultant vector of the underlying distribution. Based on these samples, a von Mises-Fisher filter is proposed for nonlinear estimation of hyperspherical states. Compared with existing von Mises-Fisher-based filtering schemes, the proposed filter exhibits superior hyperspherical tracking performance.
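
For reference, the von Mises-Fisher density on the unit hypersphere, whose mean resultant vector the proposed deterministic samples preserve, is

    f(x;\, \mu, \kappa) = C_d(\kappa)\, \exp\!\big(\kappa\, \mu^\top x\big), \qquad \|x\| = \|\mu\| = 1, \;\; \kappa \ge 0,

where C_d(κ) is the normalization constant and κ controls the concentration around the mode μ.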

Presentation

Only for registered participants

Material

Only for registered participants

7 - Motion Estimation for Tethered Airfoils with Tether Sag

Authors
    Freter, Jan Hendrik (TU Berlin)
    Seel, Thomas (TU Berlin)
    Elfert, Christoph (TU Berlin)
    Göhlich, Dietmar (TU Berlin)
Abstract

In this contribution, a motion estimation approach for the autonomous flight of tethered airfoils is presented. Accurate motion data are essential for the airborne wind energy sector to optimize the harvested wind energy, and for manufacturers of tethered airfoils to optimize the kite design based on measurement data. We propose an estimation based on tether angle measurements from the ground unit and inertial sensor data from the airfoil. In contrast to existing approaches, we account for the issue of tether sag, which renders tether angle measurements temporarily inaccurate. We formulate a Kalman filter which adaptively shifts the fusion weight to the measurement with the higher certainty. The proposed estimation method is evaluated in simulations, and a proof of concept is given on experimental data, for which the proposed method yields a three times smaller estimation error than a fixed-weight solution.

Presentation

Only for registered participants

Material

Only for registered participants

8 - Field Experiments on Shooter State Estimation Accuracy Based on Incomplete Acoustic Measurements

Authors
    Still, Luisa (Fraunhofer FKIE)
    Oispuu, Marc (Fraunhofer FKIE)
Abstract

This paper investigates the problem of shooter localization by fusing complete or incomplete experimental data of one or multiple acoustic sensors. A microphone array can measure a complete measurement data set, composed of the two bearing angles of the two impulsive sound events of a supersonic bullet and the time difference of arrival (TDOA) between both events, or an incomplete subset thereof. In this paper, experimental results from a field experiment with volumetric microphone arrays are investigated and compared with the associated Cramér-Rao bound.

Presentation

Only for registered participants

Material

Only for registered participants

Tu-A1 — Autonomous Systems

Chair: Williams, Jason (CSIRO) | Co-Chair: Clarke, Daniel Stephen (Cranfield University) | Q&A Session: Tuesday, 14:45 – 15:10 UTC

Zoom link: Only for registered participants

1 - Prototyping Autonomous Robotic Networks on Different Layers of RAMI 4.0 with Digital Twins

Authors
    Barbie, Alexander (Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research)
    Hasselbring, Wilhelm (Christian Albrechts University)
    Pech, Niklas (GEOMAR Helmholtz Centre for Ocean Research Kiel)
    Sommer, Stefan (GEOMAR Helmholtz Centre for Ocean Research Kiel)
    Flögel, Sascha (GEOMAR Helmholtz Centre for Ocean Research Kiel)
    Wenzhöfer, Frank (Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research)
Abstract

In this decade, the number of (industrial) Internet of Things devices will increase tremendously. Today, there exist no common standards for the interconnection, observation, or monitoring of these devices. In the context of the German "Industrie 4.0" strategy, the Reference Architectural Model Industry 4.0 (RAMI 4.0) was introduced to connect different aspects of this rapid development. The idea is to let different stakeholders of these products speak and understand the same terminology. In this paper, we present an approach that uses Digital Twins to prototype different layers along the axis of RAMI 4.0, using the example of an autonomous ocean observation system developed in the project ARCHES.

Presentation

Only for registered participants

Material

Only for registered participants

2 - Large-Scale UAS Traffic Management (UTM) Structure

Authors
    Sacharny, David (University of Utah)
    Henderson, Thomas C. (University of Utah)
    Cline, Michael (University of Utah)
Abstract

The advent of large-scale UAS exploitation for urban tasks, such as delivery, has led to a great deal of research and development in the UAS Traffic Management (UTM) domain. The general approach at this time is to define a grid network for the area of operation, and then have UAS Service Suppliers (USS) pairwise deconflict any overlapping grid elements for their flights. Moreover, this analysis is performed on arbitrary flight paths through the airspace, and thus may impose a substantial computational burden in order to ensure strategic deconfliction (that is, that no two flights are ever closer than the minimum required separation). However, the biggest drawback to this approach is the impact of contingencies on UTM operations. For example, if one UAS slows down, or goes off course, then strategic deconfliction is no longer guaranteed, and this can have a disastrous snowballing effect on a large number of flights. We propose a lane-based approach which not only allows a one-dimensional strategic deconfliction method, but also provides structural support for alternative contingency handling methods with minimal impact on the overall UTM system. Methods for lane creation, path assignment through lanes, flight strategic deconfliction, and contingency handling are provided here.

Presentation

Only for registered participants

Material

Only for registered participants

3 - FAA-NASA vs. Lane-Based Strategic Deconfliction

Authors
    Sacharny, David (University of Utah)
    Henderson, Thomas C. (University of Utah)
    Russon, Benjamin (University of Utah)
    Cline, Michael (University of Utah)
    Guo, Ejay (University of Utah)
Abstract

The Federal Aviation Administration (FAA) and NASA have provided guidelines for Unmanned Aircraft Systems (UAS) to ensure adequate safety separation of aircraft, and in terms of UAS Traffic Management (UTM) have stated (Rios, 2018): "A UTM Operation should be free of 4-D intersection with all other known UTM Operations prior to departure and this should be known as Strategic Deconfliction within UTM … A UTM Operator must have a facility to negotiate deconfliction of operations with other UTM Operators … There needs to be a capability to allow for intersecting operations." The latter statement means that UTM Operators must be able to fly safely in the same geographic area. The current FAA-NASA approach to strategic deconfliction is to provide a set of geographic grid elements, and then have every new flight pairwise deconflict with UTM Operators with flights in the same grid elements. Note that this imposes a high computational burden in resolving these 4D flight paths, and has side effects in terms of limiting access to the airspace (e.g., if a new flight is deconflicted and added to the common grid elements during this analysis, then the new flight must start all over). We have proposed a lane-based approach to large-scale UAS traffic management (Sacharny, 2019; Henderson, 2019) which uses one-way lanes, and roundabouts at lane intersections, to allow a much more efficient analysis and guarantee of separation safety. We present here the results of an in-depth comparison of FAA-NASA strategic deconfliction (FNSD) and lane-based strategic deconfliction (LSD) and demonstrate that FNSD suffers from several types of complexity that are generally absent from the lane-based method.

Presentation

Only for registered participants

Material

Only for registered participants

4 - From Level Four to Five: Getting Rid of the Safety Driver with Diagnostics in Autonomous Driving

Authors
    Orf, Stefan (FZI Research Center for Information Technology)
    Zofka, Marc René (FZI Research Center for Information Technology)
    Zöllner, Johann Marius (FZI Forschungszentrum Informatik)
Abstract

During the past years, autonomous driving has evolved from being only a major topic in scientific research all the way to practical and commercial applications like on-demand public transportation. Together with this evolution, new use cases have arisen, making reliability and robustness of the complete system more important than ever. Many different stakeholders during development and operation, as well as independent certification and admission authorities, pose additional challenges. By providing and capturing additional information about the running system, independent of the main driving task (e.g., by component self-tests or performance observations), the overall robustness, reliability, and safety of the vehicle is increased. This article captures the issues of autonomous driving in modern-day real-life use cases and defines what a diagnostic system needs to look like to tackle these challenges. Furthermore, the authors provide a concept for diagnostics in the heterogeneous software landscape of component-based autonomous driving architectures, taking into account their special complexities and difficulties.

Presentation

Only for registered participants

Material

Only for registered participants

5 - Mathematical Modeling and Optimal Inference of Guided Markov-Like Trajectory

Authors
    Rezaie, Reza (University of New Orleans)
    Li, X. Rong (University of New Orleans)
Abstract

A trajectory of a destination-directed moving object (e.g., an aircraft flying from an origin airport to a destination airport) has three main components: an origin, a destination, and the motion in between. We call such a trajectory, which ends up at the destination, a destination-directed trajectory (DDT). A class of conditionally Markov (CM) sequences (called CM_L) has the following main components: a joint density of the two endpoints and a Markov-like evolution law. A CM_L dynamic model can describe the evolution of a DDT, but not of a guided object chasing a moving guide. The trajectory of a guided object is called a guided trajectory (GT). Inspired by the CM_L model, this paper proposes a model for a GT with a moving guide. The proposed model reduces to a CM_L model if the guide is not moving. We also study filtering and trajectory prediction based on the proposed model. Simulation results are presented.

Presentation

Only for registered participants

Material

Only for registered participants

6 - Probabilistic Programming Languages for Modeling Autonomous Systems

Authors
    Shamsi, Seyed Mahdi (SUNY at Buffalo)
    Farina, Gian Pietro (University at Buffalo)
    Gaboardi, Marco (University at Buffalo)
    Napp, Nils (SUNY Buffalo)
Abstract

We present a robotic development framework called ROSPPL, which can accomplish many of the essential probabilistic tasks that comprise modern autonomous systems and is based on a general-purpose probabilistic programming language (PPL). Benefiting from ROS integration, a short PPL program in our framework is capable of controlling a robotic system, estimating its current state online, as well as automatically calibrating parameters and detecting errors, simply through probabilistic model and policy specification. The advantage of our approach lies in its generality, which makes it useful for quickly designing and prototyping new robots. By directly modeling the interconnection of random variables, decoupled from the inference engine, our design benefits from robustness, re-usability, upgradability, and ease of specification. In this paper, we use a self-driving vehicle (SDV) as an example of a complex autonomous system to show how different sub-components of such a system can be implemented using a probabilistic programming language, in a way that the system is capable of reasoning about itself. Our set of use cases includes localization, mapping, fault detection, calibration, and planning.

Presentation

Only for registered participants

Material

Only for registered participants

Tu-A2 — Information Fusion 1

Chair: Dunik, Jindrich (University of West Bohemia) | Co-Chair: Govaers, Felix (Universität Bonn, Fraunhofer FKIE) | Q&A Session: Tuesday, 15:15 – 15:40 UTC

Zoom link: Only for registered participants

1 - A Gamma Filter for Positive Parameter Estimation

Authors
    Govaers, Felix (Universität Bonn, Fraunhofer FKIE)
    Alqaderi, Hosam (Ibeo Automotive Systems)
Abstract

In many data fusion applications, the parameter of interest only takes positive values. For example, it might be the goal to estimate a distance or to count instances of certain items. Optimal data fusion should then model the system state as a positive random variable, which has a probability density function that is restricted to the positive real axis. However, classical approaches based on normal densities fail here, in particular whenever the variance of the likelihood is rather large compared to the mean. In this paper, we consider modeling such random parameters with a Gamma distribution, since its support is positive and it is the maximum entropy distribution for such variables. It is shown that a closed-form solution of the Bayes update can be achieved. An example within the framework of an autonomous simulation and further numerical considerations demonstrate the feasibility of the approach.
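
The abstract does not state the measurement model, but the flavor of a closed-form Gamma Bayes update can be seen from the textbook conjugate pair of a Gamma prior (rate parameterization) with a Poisson likelihood; this pairing is an illustrative assumption and not necessarily the likelihood treated in the paper:

    x \sim \mathrm{Gamma}(\alpha, \beta), \quad z \mid x \sim \mathrm{Poisson}(x) \;\;\Longrightarrow\;\; x \mid z \sim \mathrm{Gamma}(\alpha + z,\; \beta + 1)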

Presentation

Only for registered participants

Material

Only for registered participants

2 - Heterogeneous Decentralized Fusion Using Conditionally Factorized Channel Filters

Authors
    Dagan, Ofer (University of Colorado Boulder)
    Ahmed, Nisar (University of Colorado Boulder)
Abstract

This paper studies a family of heterogeneous Bayesian decentralized data fusion problems. Heterogeneous fusion considers the set of problems in which either the communicated or the estimated distributions describe different, but overlapping, states of interest which are subsets of a larger full global joint state. On the other hand, in homogeneous decentralized fusion, each agent is required to process and communicate the full global joint distribution. This might lead to high computation and communication costs irrespective of relevancy to an agent’s particular mission, for example, in autonomous multi-platform multi-target tracking problems, since the number of states scales with the number of targets and agent platforms, not with each agent’s specific local mission. In this paper, we exploit the conditional independence structure of such problems and provide a rigorous derivation for a family of exact and approximate, heterogeneous, conditionally factorized channel filter methods. Numerical examples show more than 95% potential communication reduction for heterogeneous channel filter fusion, and a multi-target tracking simulation shows that these methods provide consistent estimates.

Presentation

Only for registered participants

Material

Only for registered participants

3 - Weighted Information Filtering, Smoothing, and Out-Of-Sequence Measurement Processing

Authors
    Shulamy, Yaron (Rafael – Advanced Defense Systems)
    Sigalov, Daniel (Rafael – Advanced Defense Systems)
Abstract

We consider the problem of state estimation in dynamical systems and propose a different mechanism for handling unmodeled system uncertainties. Instead of injecting random process noise, we assign different weights to different measurements such that more recent measurements are assigned more weight. A specific choice of exponentially decaying weight function results in an algorithm with essentially the same recursive structure as the Kalman filter. It differs, however, in the manner in which old and new data are combined. While in the classical KF, the uncertainty associated with the previous estimate is inflated in an additive manner, in the present case, the uncertainty inflation is done by multiplying the previous covariance matrix by an exponential factor. This difference allows us to solve a larger variety of problems using essentially the same algorithm. We thus propose a unified method for filtering, prediction, smoothing and general out-of-sequence updates, all of which require different Kalman-like algorithms.
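
The contrast described above can be written compactly. In the classical Kalman filter, the time update inflates the covariance additively with the process noise Q, whereas exponentially decaying measurement weights lead to a multiplicative, fading-memory-style inflation. The sketch below uses generic notation with a forgetting factor 0 < λ ≤ 1 assumed here:

    \text{classical:}\;\; P_{k|k-1} = F P_{k-1|k-1} F^\top + Q \qquad\qquad \text{weighted:}\;\; P_{k|k-1} = \lambda^{-1}\, F P_{k-1|k-1} F^\top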

Presentation

Only for registered participants

Material

Only for registered participants

4 - Combination of Maximum Correntropy Criterion & α-Rényi Divergence for a Robust and Fail-Safe Multi-Sensor Data Fusion

Authors
    Makkawi, Khoder (Univ. Lille, CNRS, UMR 9189 – CRIStAL – Centre de Recherche en Informatique, Signal et Automatique de Lille)
    Ait-Tmazirte, Nourdine (Institut de Recherche Technologique Railenium)
    El Badaoui El Najjar, Maan (Univ. Lille, CNRS, UMR 9189 – CRIStAL – Centre de Recherche en Informatique, Signal et Automatique de Lille)
    Moubayed, Nazih (Lebanese University, CRSI LaRGES)
Abstract

A combination of a robust optimality criterion, the Maximum Correntropy Criterion (MCC), and a powerful Fault Detection and Exclusion (FDE) strategy for a robust and fault-tolerant multi-sensor fusion approach is presented in this paper, taking advantage of information theory. The estimator used is called the MCCNIF, which is the Nonlinear Information Filter (NIF) operating under the MCC. The NIF deals well with Gaussian noises, but its performance decreases when it abruptly faces heavy non-Gaussian noises, causing divergence. Conversely, the NIF deals fairly well with nonlinearity problems. Hence, to deal with non-Gaussian noises, the MCC shows good performance, especially with shot noises and Gaussian mixture noises. To detect and exclude erroneous measurements, an FDE layer based on the α-Rényi Divergence (α-RD) between the a priori and a posteriori probability distributions is created. Then, an adaptive threshold is calculated as a decision support based on the α-Rényi criterion (α-Rc). In order to test the proposed framework in real conditions, an autonomous vehicle multi-sensor localization example is considered. Indeed, for this application, in stringent environments (such as urban canyons, buildings, forests), it is necessary to ensure both integrity and accuracy. The proposed solution is to combine Global Navigation Satellite System (GNSS) data with odometer (odo) data in a tight integration. The main contributions of this paper are the design and development of a unique framework integrating a robust filter, the MCCNIF, and an FDE method using residuals based on the α-RD with an adaptive threshold. Real experimental data are presented and support the validation of the proposed approach.

Presentation

Only for registered participants

Material

Only for registered participants

5 - Conservative Quantization of Fast Covariance Intersection

Authors
    Funk, Christopher (Karlsruhe Institute of Technology)
    Noack, Benjamin (Karlsruhe Institute of Technology (KIT))
    Hanebeck, Uwe D. (Karlsruhe Institute of Technology (KIT))
Abstract

Sensor data fusion in wireless sensor networks poses challenges with respect to both theory and implementation. Unknown cross-correlations between estimates distributed across the network need to be addressed carefully as neglecting them leads to overconfident fusion results. In addition, limited processing power and energy supply of the sensor nodes prohibit the use of complex algorithms and high-bandwidth communication. In this work, fast covariance intersection using both quantized estimates and quantized covariance matrices is considered. The proposed method is computationally efficient and significantly reduces the bandwidth required for data transmission while retaining unbiasedness and conservativeness of fast covariance intersection. The performance of the proposed method is evaluated with respect to that of fast covariance intersection, which proves its effectiveness even in the case of substantial data reduction.
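
For context, the snippet below shows plain covariance intersection with a simple closed-form, non-iterative weight heuristic in the spirit of fast covariance intersection. The trace-based weight is an illustrative assumption, and neither the exact fast-CI weight rule nor the conservative quantization scheme of the paper is reproduced here.

    import numpy as np

    def covariance_intersection(x1, P1, x2, P2):
        """Fuse two estimates with unknown cross-correlation.
        The weight w is a simple closed-form heuristic (illustrative only)."""
        w = np.trace(P2) / (np.trace(P1) + np.trace(P2))
        P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
        Pf = np.linalg.inv(w * P1i + (1.0 - w) * P2i)
        xf = Pf @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2)
        return xf, Pf

    # toy example with two 2D estimates
    x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
    x2, P2 = np.array([1.5, 0.5]), np.diag([3.0, 1.0])
    xf, Pf = covariance_intersection(x1, P1, x2, P2)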

Presentation

Only for registered participants

Material

Only for registered participants

6 - Efficient Deterministic Conditional Sampling of Multivariate Gaussian Densities

Authors
    Frisch, Daniel (Karlsruhe Institute of Technology (KIT))
    Hanebeck, Uwe D. (Karlsruhe Institute of Technology (KIT))
Abstract

We propose a fast method for deterministic multivariate Gaussian sampling. In many application scenarios, the commonly used stochastic Gaussian sampling could simply be replaced by our method – yielding comparable results with a much smaller number of samples. Conformity between the reference Gaussian density function and the distribution of samples is established by minimizing a distance measure between Gaussian density and Dirac mixture density. A modified Cramér-von Mises distance of the Localized Cumulative Distributions (LCDs) of the two densities is employed that allows a direct comparison between continuous and discrete densities in higher dimensions. Because numerical minimization of this distance measure is not feasible under real time constraints, we propose to build a library that maintains sample locations from the standard normal distribution as a template for each number of samples in each dimension. During run time, the requested sample set is re-scaled according to the eigenvalues of the covariance matrix, rotated according to the eigenvectors, and translated according to the mean vector, thus adequately representing arbitrary multivariate normal distributions.
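
The run-time transformation described in the last sentence can be sketched in a few lines: a precomputed template of standard-normal sample locations is scaled by the square roots of the eigenvalues, rotated by the eigenvectors, and translated by the mean. The template below is a random placeholder; in the proposed method it would be a deterministic, LCD-optimized sample set loaded from the library, which is not reproduced here.

    import numpy as np

    def transform_template(template, mean, cov):
        """Map standard-normal template samples (n_samples x d) to samples
        of N(mean, cov) using the eigendecomposition of cov."""
        eigvals, eigvecs = np.linalg.eigh(cov)      # cov = V diag(eigvals) V^T
        A = eigvecs @ np.diag(np.sqrt(eigvals))     # linear map with A A^T = cov
        return mean + template @ A.T

    template = np.random.default_rng(0).standard_normal((20, 2))  # placeholder
    samples = transform_template(template,
                                 mean=np.array([1.0, -2.0]),
                                 cov=np.array([[2.0, 0.5], [0.5, 1.0]]))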

Presentation

Only for registered participants

Material

Only for registered participants

We-A1 — Information Fusion 2

Chair: Straka, Ondrej (University of West Bohemia) | Co-Chair: Frisch, Daniel (Karlsruhe Institute of Technology) | Q&A Session: Wednesday, 14:45 – 15:10 UTC

Zoom link: Only for registered participants

1 - Hypermap Mapping Framework and its Application to Autonomous Semantic Exploration

Authors
    Zaenker, Tobias (University of Bonn)
    Verdoja, Francesco (Aalto University)
    Kyrki, Ville (Aalto University)
Abstract

Modern intelligent and autonomous robotic applications often require robots to have more information about their environment than that provided by traditional occupancy grid maps. For example, a robot tasked to perform autonomous semantic exploration has to label objects in the environment it is traversing while autonomously navigating. To solve this task the robot needs to at least maintain an occupancy map of the environment for navigation, an exploration map keeping track of which areas have already been visited, and a semantic map where locations and labels of objects in the environment are recorded. As the number of maps required grows, an application has to know and handle different map representations, which can be a burden. We present the Hypermap framework, which can manage multiple maps of different types. In this work, we explore the capabilities of the framework to handle occupancy grid layers and semantic polygonal layers, but the framework can be extended with new layer types in the future. Additionally, we present an algorithm to automatically generate semantic layers from RGB-D images. We demonstrate the utility of the framework using the example of autonomous exploration for semantic mapping.

Presentation

Only for registered participants

Material

Only for registered participants

2 - Temporal Smoothing for Joint Probabilistic People Detection in a Depth Sensor Network

Authors
    Wetzel, Johannes (Intelligent Systems Research Group (ISRG), Karlsruhe University)
    Laubenheimer, Astrid (Intelligent Systems Research Group (ISRG), Karlsruhe University)
    Heizmann, Michael (Karlsruhe Institute of Technology KIT)
Abstract

Wide-area indoor people detection in a network of depth sensors is the basis for many applications, e.g. people counting or customer behavior analysis. Existing probabilistic methods use approximative stochastic inference to estimate the marginal probability distribution of people present in the scene for a single time step. In this work we investigate how the temporal context, given by a time series of multi-view depth observations, can be exploited to regularize a mean-field variational inference optimization process. We present a probabilistic grid based dynamic model and deduce the corresponding mean-field update regulations to effectively approximate the joint probability distribution of people present in the scene across space and time. Our experiments show that the proposed temporal regularization leads to a more robust estimation of the desired probability distribution and increases the detection performance.

Presentation

Only for registered participants

Material

Only for registered participants

3 - Evaluation of Confidence Sets for Estimation with Piecewise Linear Constraint

Authors
    Ajgl, Jiří (University of West Bohemia)
    Straka, Ondrej (University of West Bohemia)
Abstract

Equality constrained estimation finds its application in problems like the positioning of cars on roads. This paper compares two constructions of confidence sets. The first one is given by the intersection of a standard unconstrained confidence set and the constraint; the second one applies the constraint first and designs the confidence set afterwards. Analytical results are presented for a linear constraint. A family of piecewise linear constraints is inspected numerically. It is shown that, for the considered scenarios, the second construction with a properly tuned free parameter provides confidence sets that are smaller in expectation.

Presentation

Only for registered participants

Material

Only for registered participants

4 - An EKF Based Approach to Radar Inertial Odometry

Authors
    Doer, Christopher (Karlsruhe Institute of Technology)
    Trommer, Gert (Karlsruhe Institute of Technology)
Abstract

Accurate localization is key for autonomous robotics. Navigation in GNSS-denied and visually degraded environments is still very challenging. Approaches based on visual sensors usually fail in conditions like darkness, direct sunlight, fog or smoke. Our approach is based on a millimeter-wave FMCW radar sensor and an Inertial Measurement Unit (IMU), as both sensors can operate in these conditions. Specifically, we propose an Extended Kalman Filter (EKF) based solution to 3D Radar Inertial Odometry (RIO). A standard automotive FMCW radar which measures the 3D position and Doppler velocity of each detected target is used. Based on the radar measurements, a RANSAC 3D ego-velocity estimation is carried out. Fusion with inertial data further improves the accuracy and robustness and provides a high-rate motion estimate. An extension with barometric height fusion is presented. The radar-based ego-velocity estimation is tested in simulation, and its accuracy is evaluated with real-world datasets in a motion capture system. Tests in indoor and outdoor environments with trajectories longer than 200 m achieved a final position error below 0.6% of the distance traveled. The proposed odometry approach runs faster than real time even on an embedded computer.
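
The radar ego-velocity step mentioned above admits a compact sketch: for static targets, the measured Doppler velocity is the negative projection of the sensor velocity onto the target direction, so three or more detections fix the 3D ego velocity by least squares, and RANSAC rejects moving targets as outliers. Function names and the inlier threshold below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def ransac_ego_velocity(directions, doppler, iters=100, thresh=0.2, seed=0):
        """directions: (N, 3) unit vectors to targets; doppler: (N,) radial
        velocities. Static-world model: doppler_i = -directions_i . v_ego."""
        rng = np.random.default_rng(seed)
        best_inliers = None
        for _ in range(iters):
            idx = rng.choice(len(doppler), size=3, replace=False)
            v, *_ = np.linalg.lstsq(directions[idx], -doppler[idx], rcond=None)
            inliers = np.abs(directions @ v + doppler) < thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # refit on all inliers of the best hypothesis
        v, *_ = np.linalg.lstsq(directions[best_inliers], -doppler[best_inliers],
                                rcond=None)
        return v, best_inliers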

Presentation

Only for registered participants

Material

Only for registered participants

5 - Acoustic Echo-Localization for Pipe Inspection Robots

Authors
    Worley, Rob (University of Sheffield)
    Yu, Yicheng (University of Sheffield)
    Anderson, Sean (University of Sheffield)
Abstract

Robot localization in water and wastewater pipes is essential for path planning and for localization of faults, but the environment makes it challenging. Conventional localization suffers in pipes due to the lack of features and due to accumulating uncertainty caused by the limited perspective of typical sensors. This paper presents the implementation of an acoustic echo based localization method for the pipe environment, using a loudspeaker and microphone positioned on the robot. Echoes are used to detect distant features in the pipe and make direct measurements of the robot’s position which do not suffer from accumulated error. Novel estimation of echo class is used to refine the acoustic measurements before they are incorporated into the localization. Finally, the paper presents an investigation into the effectiveness of the method and the robustness of the method to errors in the acoustic measurements.

Presentation

Only for registered participants

Material

Only for registered participants

6 - AirMuseum: A Heterogeneous Multi-Robot Dataset for Stereo-Visual and Inertial Simultaneous Localization and Mapping

Authors
    Dubois, Rodolphe (ONERA)
    Eudes, Alexandre (ONERA)
    Fremont, Vincent (Ecole Centrale de Nantes, CNRS, LS2N, UMR 6004)
Abstract

This paper introduces a new dataset dedicated to multi-robot stereo-visual and inertial Simultaneous Localization And Mapping (SLAM). The dataset consists of five indoor multi-robot scenarios acquired with ground and aerial robots in a former Air Museum at ONERA Meudon, France. The scenarios were designed to exhibit specific opportunities and challenges associated with collaborative SLAM. Each scenario includes synchronized sequences from multiple robots with stereo images and inertial measurements. They include explicit direct interactions between robots through the detection of mounted AprilTag markers. Ground-truth trajectories for each robot were computed using Structure-from-Motion algorithms and constrained by the detection of fixed AprilTag markers placed as beacons in the experimental area. The scenarios have been benchmarked with state-of-the-art monocular, stereo-visual and visual-inertial SLAM algorithms to provide a baseline of the single-robot performance to be enhanced in collaborative frameworks.

Presentation

Only for registered participants

Material

Only for registered participants

We-A2 — Machine Learning and Artificial Intelligence

Chair: Huber, Marco F. (University of Stuttgart) | Co-Chair: Gilitschenski, Igor (Massachusetts Institute of Technology) | Q&A Session: Wednesday, 15:15 – 15:40 UTC

Zoom link: Only for registered participants

1 - Detecting Floods Caused by Tropical Cyclone Using CYGNSS Data

Authors
    Ghasemigoudarzi, Pedram (Memorial University)
    Huang, Weimin (Memorial University)
    De Silva, Oscar (Memorial University of Newfoundland)
Abstract

As a tropical cyclone reaches inland, it causes severe flash floods. Real-time flood remote sensing can reduce the resultant damages of a flash flood due to its heavy precipitation. Considering the high temporal resolution and large constellation of the Cyclone Global Navigation Satellite System (CYGNSS), it has the potential to detect and monitor flash floods. In this study, based on CYGNSS data and the Random Under-Sampling Boosted (RUSBoost) machine learning algorithm, a flood detection method is proposed. The proposed technique is applied to the areas affected by Hurricane Harvey and Hurricane Irma, for which test results indicate that the flooded points are detected with 89.00% and 85.00% accuracies, respectively, and non-flooded land points are classified with accuracies equal to 97.20% and 71.00%, respectively.

Presentation

Only for registered participants

Material

Only for registered participants

2 - Batch-Wise Regularization of Deep Neural Networks for Interpretability

Authors
    Burkart, Nadia (Fraunhofer IOSB)
    Faller, Philipp M. (Fraunhofer Center of Machine Learning, Fraunhofer IOSB)
    Huber, Marco F. (University of Stuttgart)
    Peinsipp, Elisabeth (Fraunhofer IOSB)
Abstract

Fast progress in the field of machine learning (ML) and deep learning (DL) strongly influences the research in many application domains like autonomous driving or health care. In this paper, we propose a batch-wise regularization technique to enhance the interpretability of deep neural networks (NN) by means of a global surrogate rule list. For this purpose, we introduce a novel regularization approach that yields a differentiable penalty term and thereby requires only one training of a surrogate model, in contrast to other regularization approaches. The experiments showed that the proposed approach had a high fidelity to the main model and also yielded interpretable models that were, compared to some of the baselines, more accurate.

Presentation

Only for registered participants

Material

Only for registered participants

3 - A Hybrid Approach to Hierarchical Density-Based Cluster Selection

Authors
    Malzer, Claudia (HAWK Hochschule für angewandte Wissenschaft und Kunst)
    Baum, Marcus (University of Göttingen)
Abstract

HDBSCAN is a density-based clustering algorithm that constructs a cluster hierarchy tree and then uses a specific stability measure to extract flat clusters from the tree. We show how the application of an additional threshold value can result in a combination of DBSCAN* and HDBSCAN clusters, and demonstrate potential benefits of this hybrid approach when clustering data of variable densities. In particular, our approach is useful in scenarios where we require a low minimum cluster size but want to avoid an abundance of micro-clusters in high-density regions. The method can directly be applied to HDBSCAN’s tree of cluster candidates and does not require any modifications to the hierarchy itself. It can easily be integrated as an addition to existing HDBSCAN implementations.
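
This hybrid cluster selection is also what the widely used Python hdbscan package exposes as a distance threshold on cluster extraction; the snippet below shows that usage. The parameter name cluster_selection_epsilon and its availability are assumed from recent hdbscan releases, and the threshold value is application dependent.

    import numpy as np
    import hdbscan

    points = np.random.default_rng(0).normal(size=(500, 2))    # toy data
    clusterer = hdbscan.HDBSCAN(min_cluster_size=5,             # keep small clusters
                                cluster_selection_epsilon=0.5)  # avoid micro-clusters
    labels = clusterer.fit_predict(points)                      # -1 marks noise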

Presentation

Only for registered participants

Material

Only for registered participants

4 - Estimating Uncertainties of Recurrent Neural Networks in Application to Multitarget Tracking

Authors
    Pollithy, Daniel (Karlsruhe Institute of Technology)
    Reith-Braun, Marcel (Karlsruhe Institute of Technology (KIT))
    Pfaff, Florian (Karlsruhe Institute of Technology (KIT))
    Hanebeck, Uwe D. (Karlsruhe Institute of Technology (KIT))
Abstract

In multitarget tracking, finding an association between the new measurements and the known targets is a crucial challenge. By considering both the uncertainties of all the predictions and measurements, the most likely association can be determined. While Kalman filters inherently provide the predicted uncertainties, they require a predefined model. In contrast, neural networks offer data-driven possibilities, but provide only deterministic predictions. We therefore compare two common approaches for uncertainty estimation in neural networks applied to LSTMs using our multitarget tracking benchmark for optical belt sorting. As a result, we show that the estimation of measurement uncertainties improves the tracking results of LSTMs, posing them as a viable alternative to manual motion modeling.

Presentation

Only for registered participants

Material

Only for registered participants

5 - Machine Assisted Video Tagging of Elderly Activities in K-Log Centre

Authors
    Lee, Chanwoong (Korea Institute of Science and Technology (KIST))
    Choi, Hyorim (Korea Institute of Science and Technology (KIST))
    Muralidharan, Shapna (Korea Institute of Science and Technology (KIST))
    Ko, Heedong (Korea Institute of Science and Technology (KIST))
    Yoo, Byounghyun (Korea Institute of Science and Technology)
    Kim, Gerard J. (Korea University)
Abstract

In a rapidly aging society like South Korea's, the number of Alzheimer's Disease (AD) patients is a significant public health problem, and specialized healthcare centers are in high demand. Healthcare providers generally rely on caregivers (CGs) for elderly persons with AD to monitor and help them in their daily activities. The K-Log Centre is a healthcare provider located in Korea that helps AD patients meet their daily needs with assistance from CGs in the center. The CGs in the K-Log Centre need to attend to the patients' unique demands and everyday essentials for long-term care. Moreover, the CGs also describe and log the day-to-day activities in an Activities of Daily Living (ADL) log, which comprises various events in detail. These logging activities can overburden the CGs' work, leading to undesirable results such as a declining quality of elderly care, the hiring of additional CGs to maintain the quality of care, and a negative feedback cycle. In this paper, we analyze this impending issue in the K-Log Centre and propose a method to facilitate machine-assisted human tagging of videos for logging of elderly activities using Human Activity Recognition (HAR). To enable the scenario, we use a You Only Look Once (YOLOv3)-based deep learning method for object detection and use it for HAR, creating multi-modal machine-assisted human tagging of videos. The proposed algorithm detects the HAR with a precision of 98.4%. After designing the HAR model, we tested it on a live video feed from the K-Log Centre to evaluate the proposed method. The model showed an accuracy of 91.5% on live data, reducing the logging activities of the CGs.

Presentation

Only for registered participants

Material

Only for registered participants

6 - Automatic Discovery of Motion Patterns That Improve Learning Rate in Communication-Limited Multi-Robot Systems

Authors
    Choi, Taeyeong (Arizona State University)
    Pavlic, Theodore (Arizona State University)
Abstract

Learning in robotic systems is largely constrained by the quality of the training data available to a robot learner. Robots may have to make multiple, repeated expensive excursions to gather this data or have humans in the loop to perform demonstrations to ensure reliable performance. The cost can be much higher when a robot embedded within a multi-robot system must learn from the complex aggregate of the many robots that surround it and may react to the learner’s motions. In our previous work [1], [2], we considered the problem of Remote Teammate Localization (ReTLo), where a single robot in a team uses passive observations of a nearby neighbor to accurately infer the position of robots outside of its sensory range even when robot-to-robot communication is not allowed in the system. We demonstrated a communication-free approach to show that the rearmost robot can use motion information of a single robot within its sensory range to predict the positions of all robots in the convoy. Here, we expand on that work with Selective Random Sampling (SRS), a framework that improves the ReTLo learning process by enabling the learner to actively deviate from its trajectory in ways that are likely to lead to better training samples and consequently gain accurate localization ability with fewer observations. By adding diversity to the learner’s motion, SRS simultaneously improves the learner’s predictions of all other teammates and thus can achieve similar performance as prior methods with less data.

Presentation

Only for registered participants

Material

Only for registered participants

We-A3 — Multisensor Data Fusion and Calibration

Chair: Strand, Marcus (Baden-Wuerttemberg Cooperative State University Karlsruhe)|Co-Chair: Henderson, Thomas C. (University of Utah)|Q&A Session: Wednesday, 15:45 – 16:10 UTC

Zoom link: Only for registered participants

1 - An Application of IMM Based Sensor Fusion Algorithm in Train Positioning System

Authors
    Kara, Süleyman Fatih (Aselsan Inc.)
    Basaran, Burak (ASELSAN Inc.)
Abstract

Train positioning systems have a serious impact on the safe and economic operation of railways and therefore play a crucial role in railway signalling. In this paper, we present a solution for such a train positioning system that makes use of a tachometer, a Doppler radar, and a magnetic positioning sensor (a.k.a. tag). An IMM (Interacting Multiple Model) filter-based sensor fusion algorithm is used to calculate the velocity and position of the train from these sensors. The algorithm has been developed with SCADE (Safety Critical Application Development Environment), a tool frequently used for the development of safety-critical systems because it drastically simplifies and accelerates the certification process required by EN 50128.
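
As textbook background rather than a description of the paper's filter, the core of an IMM cycle is the mixing of the model-conditioned estimates \hat{x}_i, P_i according to the model probabilities \mu_i and the Markov transition probabilities p_{ij}:

    \mu_{i|j} = \frac{p_{ij}\,\mu_i}{\sum_k p_{kj}\,\mu_k},
    \qquad
    \hat{x}_{0j} = \sum_i \mu_{i|j}\,\hat{x}_i,
    \qquad
    P_{0j} = \sum_i \mu_{i|j}\left[P_i + (\hat{x}_i - \hat{x}_{0j})(\hat{x}_i - \hat{x}_{0j})^{\top}\right].

Each model's Kalman filter is then run from the mixed estimate, and the model probabilities are updated from the measurement likelihoods.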

Presentation

Only for registered participants

Material

Only for registered participants

2 - Unsupervised Optimization Approach to in Situ Calibration of Collaborative Human-Robot Interaction Tools

Authors
    Maric, Bruno (University of Zagreb, Faculty of Electrical Engineering and Comp)
    Polic, Marsela (University of Zagreb)
    Tabak, Tomislav (University of Zagreb)
    Orsag, Matko (University of Zagreb, Faculty of Electrical Engineering and Comp)
Abstract

In this work we propose an intuitive tool, based on a motion capture system, for programming-by-demonstration tasks in robot manipulation. For a robot manipulator set in a working environment equipped with any external measurement system, we propose an online calibration method based on unsupervised learning and simplex optimization. Without loss of generality, the Nelder-Mead simplex method is used to calibrate the rigid transforms of the robot tools and the environment from motion capture recordings. A fast optimization procedure is enabled through dataset subsampling using an iterative clustering and outlier detection procedure. The online calibration enables customization and execution of programming-by-demonstration tasks in real time.
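
As a rough illustration of simplex-based rigid-transform calibration (a generic sketch assuming SciPy and NumPy, not the authors' pipeline), one can parametrize the unknown transform by translation and roll-pitch-yaw and let Nelder-Mead minimize the mismatch between motion-capture observations and the robot-frame points:

    # Sketch: calibrating a rigid transform with the Nelder-Mead simplex method.
    # Generic illustration (SciPy/NumPy assumed), not the authors' implementation.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def transform(params, points):
        # params = [tx, ty, tz, roll, pitch, yaw]
        R = Rotation.from_euler("xyz", params[3:]).as_matrix()
        return points @ R.T + params[:3]

    # Dummy data: points expressed in the robot frame, and the same points as seen
    # by the motion capture system under an unknown rigid transform plus noise.
    rng = np.random.default_rng(1)
    robot_pts = rng.uniform(-0.5, 0.5, size=(30, 3))
    true_params = np.array([0.2, -0.1, 0.05, 0.1, -0.2, 0.3])
    mocap_pts = transform(true_params, robot_pts) + rng.normal(0, 1e-3, (30, 3))

    def cost(params):
        residual = transform(params, robot_pts) - mocap_pts
        return np.sum(residual ** 2)

    result = minimize(cost, x0=np.zeros(6), method="Nelder-Mead",
                      options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
    print("estimated transform parameters:", np.round(result.x, 3))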

Presentation

Only for registered participants

Material

Only for registered participants

3 - Online 3D Frontier-Based UGV and UAV Exploration Using Direct Point Cloud Visibility

Authors
    Williams, Jason (CSIRO)
    Jiang, Shu (Georgia Institute of Technology)
    O’Brien, Matthew (Georgia Institute of Technology)
    Wagner, Glenn (Emesent)
    Hernandez, Emili (Emesent)
    Cox, Mark (CSIRO)
    Pitt, Alex (CSIRO)
    Arkin, Ronald (Georgia Tech)
    Hudson, Nicolas (X, The Moonshot Factory)
Abstract

While robots have long been proposed as a tool to reduce human personnel's exposure to danger in subterranean environments, these environments also present significant challenges to the development of such robots. Fundamental to this challenge is the problem of autonomous exploration. Frontier-based methods have been a powerful and successful approach to exploration, but complex 3D environments remain a challenge when online employment is required. This paper presents a new approach that addresses the complexity of operating in 3D by directly modelling the boundary between observed free and unobserved space (the frontier), rather than utilising dense 3D volumetric representations. By avoiding a representation involving a single map, it also achieves scalability to problems where Simultaneous Localisation and Mapping (SLAM) loop closures are essential. The approach enabled a team of seven ground and air robots to autonomously explore the DARPA Subterranean Challenge Urban Circuit, jointly traversing over 8 km in a complex and communication-denied environment.

Presentation

Only for registered participants

Material

Only for registered participants

4 - Certifiably Optimal Monocular Hand-Eye Calibration

Authors
    Wise, Emmett (University of Toronto)
    Giamou, Matthew (University of Toronto)
    Khoubyarian, Soroush (University of Toronto)
    Grover, Abhinav (University of Toronto)
    Kelly, Jonathan (University of Toronto)
Abstract

Correct fusion of data from two sensors requires an accurate estimate of their relative pose, which can be determined through the process of extrinsic calibration. When the sensors are capable of producing their own egomotion estimates (i.e., measurements of their trajectories through an environment), the 'hand-eye' formulation of extrinsic calibration can be employed. In this paper, we extend our recent work on a convex optimization approach for hand-eye calibration to the case where one of the sensors cannot observe the scale of its translational motion (e.g., a monocular camera observing an unmapped environment). We prove that our technique is able to provide a certifiably globally optimal solution to both the known- and unknown-scale variants of hand-eye calibration, provided that the measurement noise is bounded. Herein, we focus on the theoretical aspects of the problem, show the tightness and stability of our convex relaxation, and demonstrate the optimality and speed of our algorithm through experiments with synthetic data.
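
For orientation, and as standard background rather than a summary of the paper's formulation, the hand-eye problem seeks the unknown extrinsic transform X = (R_X, t_X) from pairs of relative motions A_i and B_i reported by the two sensors; a common way to write the constraints, with \alpha denoting an unknown monocular scale, is:

    A_i X = X B_i,
    \qquad
    R_{A_i} R_X = R_X R_{B_i},
    \qquad
    R_{A_i} t_X + t_{A_i} = R_X t_{B_i} + t_X \;\; \text{(known scale)},
    \qquad
    R_{A_i} t_X + t_{A_i} = \alpha\, R_X t_{B_i} + t_X \;\; \text{(unknown scale } \alpha\text{)}.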

Presentation

Only for registered participants

Material

Only for registered participants

5 - Robust Positioning Based on Opportunistic Radio Sources and Doppler

Authors
    Lindgren, David (Swedish Defence Research Agency, FOI)
    Nordzell, Andreas (Springbreeze AB)
Abstract

Doppler shift measurements on opportunistic radio sources can be an alternative to GNSS in disturbed environments. Mobile measurements on a GSM base station indicate that the uncertainty is sufficiently low for vehicle positioning, provided that at least two sources are within range and that measurements are fused with an odometer and a rate gyro. A key idea is to fuse the relatively uncertain Doppler measurements with accurate measurements of the vehicle speed. The positioning performance is analyzed by Monte Carlo simulations. A position RMSE in the interval 15–44 m can be expected in a suburban environment with limited occlusion.
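
As standard background (not a formula quoted from the paper), the Doppler shift observed from a moving vehicle for a stationary emitter at carrier frequency f_c is proportional to the range rate toward that source,

    f_d(t) = -\frac{f_c}{c}\,\dot{r}(t),
    \qquad
    r(t) = \lVert p(t) - p_{\mathrm{src}} \rVert,

which is why fusing f_d with an accurate speed (odometer) and heading rate (gyro) constrains the vehicle position relative to each opportunistic source.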

Presentation

Only for registered participants

Material

Only for registered participants

6 - Observability Driven Multi-Modal Line-Scan Camera Calibration

Authors
    Mehami, Jasprabhjit (University of Technology Sydney)
    Vidal-Calleja, Teresa A. (University of Technology Sydney)
    Alempijevic, Alen (University of Technology Sydney)
Abstract

Multi-modal sensors such as hyperspectral line-scan and frame cameras can be incorporated into a single camera system, enabling individual sensor limitations to be compensated. Calibration of such systems is crucial to ensure data from one modality can be related to the other. The best known approach is to capture multiple measurements of a known planar pattern, which are then used to optimize calibration parameters through non-linear least squares. The confidence in the optimized parameters is dependent on the measurements, which are contaminated by noise due to sensor hardware. Understanding how this noise transfers through the calibration is essential, especially when dealing with line-scan cameras that rely on measurements to extract feature points. This paper adopts a maximum likelihood estimation method for propagating measurement noise through the calibration, such that the optimized parameters are associated with an estimate of uncertainty. The uncertainty enables development of an active calibration algorithm, which uses observability to selectively choose images that improve parameter estimation. The algorithm is tested in both simulation and hardware, then compared to a naive approach that uses all images to calibrate. The simulation results for the algorithm show a drop of 26.4% in the total normalized error and 46.8% in the covariance trace. Results from the hardware experiments also show a decrease in the covariance trace, demonstrating the importance of selecting good measurements for parameter estimation.
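
A toy sketch of the selection idea, under the assumption that each candidate image contributes an information matrix for the calibration parameters (illustrative only, not the paper's algorithm): greedily pick the next image whose inclusion most reduces the trace of the parameter covariance.

    # Sketch: greedy, observability-driven selection of calibration measurements.
    # Illustrative only; assumes each candidate image i contributes an information
    # matrix J_i (e.g., from a linearized measurement model) for the parameters.
    import numpy as np

    def greedy_select(infos, n_select, prior_info=None):
        dim = infos[0].shape[0]
        total = prior_info.copy() if prior_info is not None else 1e-6 * np.eye(dim)
        selected = []
        remaining = list(range(len(infos)))
        for _ in range(n_select):
            # Pick the candidate that minimizes the covariance trace after inclusion.
            best = min(remaining, key=lambda i: np.trace(np.linalg.inv(total + infos[i])))
            selected.append(best)
            remaining.remove(best)
            total = total + infos[best]
        return selected, np.trace(np.linalg.inv(total))

    # Dummy candidates: random positive semi-definite information matrices.
    rng = np.random.default_rng(2)
    candidates = [(lambda A: A @ A.T)(rng.normal(size=(4, 4))) for _ in range(20)]
    chosen, cov_trace = greedy_select(candidates, n_select=5)
    print("chosen image indices:", chosen, " covariance trace:", round(cov_trace, 4))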

Presentation

Only for registered participants

Material

Only for registered participants

We-B1 — Multisensor Data Fusion for Autonomous Vehicles

Chair: Honer, Jens (Valeo Schalter und Sensoren GmbH)|Co-Chair: Baum, Marcus (University of Göttingen)|Q&A Session: Wednesday, 15:45 – 16:10 UTC

Zoom link: Only for registered participants

1 - Deterministic Gibbs Sampling for Data Association in Multi-Object Tracking

Authors
    Wolf, Laura M. (University of Göttingen)
    Baum, Marcus (University of Göttingen)
Abstract

In multi-object tracking, multiple objects generate multiple sensor measurements, which are used to estimate the objects' states simultaneously. Since it is unknown from which object a measurement originates, a data association problem arises. Considering all possible associations is computationally infeasible for large numbers of objects and measurements. Hence, approximation methods are applied to compute the most relevant associations. Here, we focus on deterministic methods, since multi-object tracking is often applied in safety-critical areas. In this work we show that Herded Gibbs sampling, a deterministic version of Gibbs sampling, applied in the Labeled Multi-Bernoulli filter, yields results of the same quality as randomized Gibbs sampling while having comparable computational complexity. We conclude that it is a suitable deterministic alternative to randomized Gibbs sampling and could be a promising approach for other data association problems.
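
To convey the flavor of herding on which Herded Gibbs sampling is built, the sketch below deterministically "samples" from a categorical distribution: a weight vector is updated without randomness and the index with the largest weight is emitted, so the empirical frequencies converge to the target probabilities. This is a generic illustration, not the paper's LMB filter implementation.

    # Sketch: deterministic "herded" sampling from a categorical distribution.
    # Generic illustration of the herding idea; not the paper's implementation.
    import numpy as np

    def herded_samples(probs, n_samples):
        probs = np.asarray(probs, dtype=float)
        weights = probs.copy()
        out = []
        for _ in range(n_samples):
            i = int(np.argmax(weights))   # deterministic pick
            out.append(i)
            weights[i] -= 1.0             # penalize the chosen index
            weights += probs              # keep accumulating the target distribution
        return np.array(out)

    p = [0.5, 0.3, 0.2]
    samples = herded_samples(p, 1000)
    print("empirical frequencies:", np.bincount(samples) / len(samples))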

Presentation

Only for registered participants

Material

Only for registered participants

2 - Robust Vehicle Tracking with Monocular Vision Using Convolutional Neuronal Networks

Authors
    Dichgans, Jakob (Bundeswehr University Munich)
    Kallwies, Jan (Bundeswehr University Munich)
    Wuensche, Hans Joachim Joe (Bundeswehr University Munich)
Abstract

In this paper we present a robust tracking system that enables an autonomous vehicle to follow a specific convoy leader. Images from a single camera are used as input data, from which predefined keypoints on the lead vehicle are detected by a convolutional neural network. This approach was inspired by the idea of human pose estimation and is shown to be significantly more accurate compared to standard bounding box detection approaches like YOLO. The estimation of the dynamic state of the leading vehicle is realized by means of a moving horizon estimator. We show the practical capabilities and usefulness of the system in real-world experiments. The experiments show that the tracking system, although it only operates with images, is competitive with earlier approaches that also used other sensors such as LiDAR.

Presentation

Only for registered participants

Material

Only for registered participants

3 - OAFuser: Online Adaptive Extended Object Tracking and Fusion Using Automotive Radar Detections

Authors
    Haag, Stefan (Mercedes Benz AG)
    Duraisamy, Bharanidhar (Daimler AG)
    Blessing, Moritz Constantin (Hochschule Esslingen)
    Marchthaler, Reiner (Hochschule Esslingen)
    Koch, Wolfgang (FGAN-FKIE)
    Fritzsche, Martin (Daimler AG)
    Dickmann, Jürgen (Daimler AG)
Abstract

This paper presents the Online Adaptive Fuser (OAFuser), a novel method for online adaptive estimation of motion and measurement uncertainties for efficient tracking and fusion, which runs a set of noise estimators alongside the conventional state and state-covariance estimation. In our system, process and measurement noise are estimated with steady-state filters to obtain combined measurement-noise and process-noise estimators for all sensors, so that the state can be estimated with a linear Minimum Mean Square Error (MMSE) estimator while accelerating the system's performance. The proposed adaptive tracking and fusion system was tested on high-fidelity simulation data and several real-world automotive radar scenarios where ground-truth data is available for evaluation. We demonstrate the proposed method's accuracy and efficiency in a challenging, highly dynamic scenario in which our system is benchmarked against a Multiple Model filter in terms of error statistics and run-time performance.

Presentation

Only for registered participants

Material

Only for registered participants

4 - Extended Object Framework based on Weighted Exponential Products

Authors
    Bruggner, Dennis (Mentor Graphics, a Siemens Business)
    Clarke, Daniel Stephen (Cranfield University)
    Gulati, Dhiraj (Mentor Graphics, a Siemens Business)
Abstract

Estimating the number of targets and their states is an important aspect of sensor fusion. In some applications, such as autonomous driving, multiple measurements stem from extended targets because high-resolution sensors like LiDAR or radar produce multiple reflections from the target's shape. Multi-target tracking techniques based on point-target assumptions are generally not suitable for these types of sensor measurements. In recent years, a number of techniques have been introduced that use a known shape, or estimate the shape, to retrieve the position of the object. In this paper we introduce a novel approach that neither knows nor estimates the shape but instead uses all the available information by fusing the measurements from one object with a conservative fusion technique based on the Weighted Exponential Product rule. The results show that we obtain similar performance to state-of-the-art approaches in our simulations.
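
As standard background rather than the paper's exact rule, the weighted exponential product of two Gaussian estimates, p(x) \propto p_1(x)^{\omega} p_2(x)^{1-\omega}, is again Gaussian and gives the familiar covariance-intersection style fusion

    P^{-1} = \omega P_1^{-1} + (1-\omega) P_2^{-1},
    \qquad
    P^{-1}\hat{x} = \omega P_1^{-1}\hat{x}_1 + (1-\omega) P_2^{-1}\hat{x}_2,
    \qquad \omega \in [0, 1],

which stays conservative because it never assumes the fused estimates are independent.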

Presentation

Only for registered participants

Material

Only for registered participants

5 - Bayesian Extended Target Tracking with Automotive Radar Using Learned Spatial Distribution Models

Authors
    Honer, Jens (Valeo Schalter und Sensoren GmbH)
    Kaulbersch, Hauke (Georg-August-Universität Göttingen)
Abstract

We apply the concept of random set cluster processes in combination with a learned measurement model to extended target tracking. The spatial distribution of measurements generated by a target vehicle is learned via a variational Gaussian mixture (VGM) model. The VGM is then interpreted as the measurement likelihood of a Multi-Bernoulli (MB) distribution. We derive a closed-form Bayesian recursion for tracking an extended target by means of a random set cluster process. This formulation is particularly successful for sparse and noisy measurements and is applied to automotive Radio Detection and Ranging (RADAR) detections. Finally, we provide a large-scale evaluation of our approach based on the published nuScenes dataset.
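
A minimal sketch of fitting a variational Gaussian mixture to 2D detections, using scikit-learn's BayesianGaussianMixture as an off-the-shelf stand-in; the data and parameters are illustrative and not the authors' model.

    # Sketch: learning a spatial measurement distribution with a variational
    # Gaussian mixture. Uses scikit-learn's BayesianGaussianMixture as a stand-in;
    # data and parameters are illustrative.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(3)
    # Dummy radar detections around a vehicle contour: two reflection centers.
    detections = np.vstack([
        rng.normal([2.0, 0.8], [0.3, 0.1], size=(150, 2)),   # e.g. rear bumper
        rng.normal([4.5, 0.0], [0.2, 0.4], size=(100, 2)),   # e.g. wheel house
    ])

    vgm = BayesianGaussianMixture(n_components=5, weight_concentration_prior=0.1,
                                  covariance_type="full", max_iter=500, random_state=0)
    vgm.fit(detections)

    # Components with non-negligible weight approximate the spatial likelihood.
    active = vgm.weights_ > 0.05
    print("active components:", active.sum())
    print("means:\n", np.round(vgm.means_[active], 2))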

Presentation

Only for registered participants

Material

Only for registered participants

6 - A Continuous Probabilistic Origin Association Filter for Extended Object Tracking

Authors
    Berthold, Philipp (Bundeswehr University Munich)
    Michaelis, Martin (Bundeswehr University Munich)
    Luettel, Thorsten (Bundeswehr University Munich)
    Meissner, Daniel (BMW Group)
    Wuensche, Hans Joachim Joe (Bundeswehr University Munich)
Abstract

One major challenge in extended object tracking is the association of a point measurement to its true origin on a target object. The origins of measurements are often spatially distributed over the full extent of the target. The association of measurements to the possible origins within the targets' extent is difficult, especially for low-resolution sensors that provide only a few measurements per object. We address this using a soft association of a point measurement to its origin candidates on the target: association probabilities to the different possible origins are calculated for each measurement, and the candidates are weighted according to these probabilities in the filtering step. We also extend this filter to continuous rather than only discrete association possibilities, which allows us to associate point measurements to lines. This paper outlines the derivation of the filter and gives three exemplary applications. A simulation compares the performance of this approach with other filter techniques for tracking a moving line. The transfer of the filter to a moving circle is discussed. Additionally, we discuss its usage for a Doppler-radar-based detection association that exploits the radial speed information. We discuss the advantages and drawbacks of this approach and give recommendations for optimizing the computation time.

Presentation

Only for registered participants

Material

Only for registered participants

We-A4 — Sensors

Chair: Rao, Nageswara (Oak Ridge National Lab)|Co-Chair: Seel, Thomas (TU Berlin)|Q&A Session: Wednesday, 16:15 – 16:40 UTC

Zoom link: Only for registered participants

1 - Towards an Intuitive Human-Robot Interaction Based on Hand Gesture Recognition and Proximity Sensors

Authors
    Al, Gorkem Anil (University of Bath)
    Estrela, Pedro (University of Bath)
    Martinez-Hernandez, Uriel (University of Bath)
Abstract

In this paper, we present a multimodal sensor interface that is capable of recognizing hand gestures for human-robot interaction. The proposed system is composed of an array of proximity and gesture sensors mounted on a 3D-printed bracelet. The gesture sensors are employed for data collection from four hand gesture movements (up, down, left and right) performed by the human at a predefined distance from the sensorised bracelet. The hand gesture movements are classified using Artificial Neural Networks. The proposed approach is validated with systematic experiments in offline and real-time modes. First, in offline mode, recognition of the four hand gesture movements achieved a mean accuracy of 97.86%. Second, the trained model was used for classification in real time and achieved a mean recognition accuracy of 97.7%. The output of the recognized hand gesture in real-time mode was used to control the movement of a Universal Robot (UR3) arm in the CoppeliaSim simulation environment. Overall, the results from the experiments show that using multimodal sensors together with computational intelligence methods has the potential to enable the development of intuitive and safe human-robot interaction.
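
A compact sketch of the classification stage, using scikit-learn's MLPClassifier as a generic stand-in for the paper's network; the feature layout, dimensions, and synthetic data are assumptions for illustration only.

    # Sketch: classifying four hand-gesture movements from proximity/gesture sensor
    # features with a small feed-forward network. Generic stand-in (scikit-learn),
    # not the paper's architecture; feature dimensions are assumptions.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n_per_class, n_features = 200, 16          # e.g. windowed readings per sensor
    X = rng.normal(size=(4 * n_per_class, n_features))
    y = np.repeat(["up", "down", "left", "right"], n_per_class)
    X += (np.arange(4).repeat(n_per_class))[:, None] * 0.5   # make classes separable

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    clf.fit(X_tr, y_tr)
    print("offline accuracy:", round(clf.score(X_te, y_te), 3))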

Presentation

Only for registered participants

Material

Only for registered participants

2 - Evaluation of Optical Motion Capture System Performance in Human-Robot Collaborative Cells

Authors
    Gonzalez Rodriguez, Leticia (University of Oviedo)
    Alvarez, Juan Carlos (Universidad de Oviedo)
    Lopez Rodriguez, Antonio Miguel (University of Oviedo)
    Alvarez Prieto, Diego (University of Oviedo)
Abstract

This article describes a new methodology for the metrological evaluation of a human-robot collaborative environment based on optical motion capture (OMC) systems. By taking advantage of the industrial robot already present in the production cell, the workspace calibration procedure can be automated, reducing the need for human intervention. The method is inspired by the ASTM E3064 test guide, and the results presented show that the metrological characteristics obtained in this way are compatible with, and comparable in quality to, those obtained with the manual procedure.

Presentation

Only for registered participants

Material

Only for registered participants

3 - Detecting Low-Level Radiation Sources Using Border Monitoring Gamma Sensors

Authors
    Rao, Nageswara (Oak Ridge National Lab)
    Sen, Satyabrata (Oak Ridge National Laboratory)
    Wu, Chase (New Jersey Institute of Technology)
    Brooks, Richard (Clemson University)
    Temples, Christopher (Clemson University)
Abstract

We consider the problem of detecting a low-level radiation source using a network of gamma sensors placed on the periphery of a monitored region. We propose a computationally lightweight, correlation-based method that is primarily intended for systems with limited computing capacity. Sensor measurements are combined at the fusion center by first generating decisions at each time step and then taking their majority vote within a time window. At each time step, decisions are generated using two strategies: (i) the SUM method computes an aggregated test statistic from the individual sensor measurements and compares it to a threshold, and (ii) the OR method computes the logical OR of decisions based on thresholds applied to the individual sensor measurements. We derive analytical performance bounds for the false alarm rates of the SUM and OR methods, and show that their performance is enhanced by majority-based temporal smoothing. Using measurements from a test campaign, we generate a border monitoring scenario with twelve 2"x2" NaI gamma sensors deployed on the periphery of a 42 m x 42 m outdoor region. A Cs-137 source is moved in a straight line across this region, starting several meters outside it and finally moving away from it. We illustrate the performance of both correlation-based detection methods, and compare them with each other and with a particle filter method. Overall, under small false-alarm conditions, the OR fusion is found to produce better detection performance.
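
A small sketch of the two decision rules and the majority vote; thresholds, window lengths, and the synthetic count data are illustrative assumptions, not the values used in the paper.

    # Sketch: SUM and OR fusion of per-sensor count-rate measurements with
    # majority-vote temporal smoothing. Values are illustrative, not the paper's.
    import numpy as np

    def sum_decisions(measurements, threshold):
        # measurements: (time, sensors) array; one aggregated test per time step
        return measurements.sum(axis=1) > threshold

    def or_decisions(measurements, per_sensor_threshold):
        # logical OR of per-sensor threshold tests at each time step
        return (measurements > per_sensor_threshold).any(axis=1)

    def majority_vote(decisions, window):
        # declare detection if more than half of the decisions in the window are 1
        counts = np.convolve(decisions.astype(float), np.ones(window), mode="same")
        return counts > window / 2

    rng = np.random.default_rng(5)
    counts = rng.poisson(lam=20, size=(300, 12))           # 12 sensors, background
    counts[100:150, 3] += rng.poisson(lam=8, size=50)      # weak source near sensor 3

    sum_det = majority_vote(sum_decisions(counts, threshold=12 * 20 + 40), window=9)
    or_det = majority_vote(or_decisions(counts, per_sensor_threshold=32), window=9)
    print("SUM detections:", int(sum_det.sum()), " OR detections:", int(or_det.sum()))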

Presentation

Only for registered participants

Material

Only for registered participants

4 - Towards Automatic Classification of Fragmented Rock Piles Via Proprioceptive Sensing and Wavelet Analysis

Authors
    Artan, Unal (Queen’s University)
    Marshall, Joshua A. (Queen’s University)
Abstract

In this paper, we describe a method for classifying rock piles characterized by different size distributions using accelerometer data and wavelet analysis. Size distribution (fragmentation) estimates are used in the mining and aggregates industries to ensure that the rock entering the crushing and grinding circuits meets input design specifications. Current technologies use exteroceptive sensing to estimate size distributions from, for example, camera images. Our approach instead proposes the use of signals acquired from the loading equipment used to transport the fragmented rock. The experimental setup used a laboratory-sized mock-up of a haul truck with two inertial measurement units (IMUs) for data collection. Results based on wavelet analysis show how accelerometers could be used to distinguish between piles with different size distributions.
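
A minimal sketch of extracting wavelet-based features from an accelerometer signal, assuming the PyWavelets package (pywt); the wavelet, decomposition depth, and synthetic signals are illustrative, not those of the paper.

    # Sketch: wavelet-decomposition features from an accelerometer signal for
    # coarse/fine rock-pile discrimination. Assumes PyWavelets (pywt); the wavelet,
    # decomposition depth, and synthetic signals are illustrative.
    import numpy as np
    import pywt

    def wavelet_energy_features(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # relative energy per decomposition level (approximation + details)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    rng = np.random.default_rng(6)
    t = np.arange(2000) / 100.0
    fine_pile = rng.normal(0, 0.2, t.size) + 0.1 * np.sin(2 * np.pi * 15 * t)
    coarse_pile = rng.normal(0, 0.2, t.size) + 0.6 * np.sin(2 * np.pi * 3 * t)

    print("fine-pile features:  ", np.round(wavelet_energy_features(fine_pile), 3))
    print("coarse-pile features:", np.round(wavelet_energy_features(coarse_pile), 3))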

Presentation

Only for registered participants

Material

Only for registered participants

5 - Local and Global Sensors for Collision Avoidance

Authors
    Rashid, Aquib (Fraunhofer IWU)
    Peesapati, Surya Kannan (Fraunhofer IWU)
    Bdiwi, Mohamad (Fraunhofer Institute for Machine Tools and Forming Technology IW)
    Krusche, Sebastian (Fraunhofer IWU)
    Hardt, Wolfram (Tu Chemnitz)
    Putz, Matthias (Fraunhofer Institute for Machine Tools and Forming Technology IW)
Abstract

Implementing safe and efficient human-robot collaboration in agile production cells with heavy-duty industrial robots, which have large stopping distances and large self-occlusion areas, is a challenging task. Collision avoidance is the main functionality required to realize this task. In fact, it requires accurate estimation of the shortest distance between known (robot) and unknown (human or other) objects in a large area. This work proposes a selective fusion of global and local sensors, namely a long-range 360° LiDAR and a short-range RGB camera respectively, in the context of dynamic speed and separation monitoring. The safety functionality has been evaluated for collision detection between an unknown dynamic object and the manipulator joints. The system yields a 29% efficiency gain compared to a fenced system. A heavy-duty industrial robot and a controlled linear-axis dummy are used for evaluating different robot and scenario configurations. Results suggest higher efficiency and safety when using the combined local and global setup.

Presentation

Only for registered participants

Material

Only for registered participants

6 - A Mobile and Modular Low-Cost Sensor System for Road Surface Recognition Using a Bicycle

Authors
    Springer, Matthias (University of Augsburg)
    Ament, Christoph (Augsburg University)
Abstract

The quality of pavements is important for comfort and safety when riding a bicycle on roads and cycleways. As pavements age due to environmental impacts, periodic inspection is required for maintenance planning. Since this involves considerable effort and cost, there is a need to monitor roads using affordable sensors. This paper presents a modular, low-cost measurement system for road surface recognition. It consists of several sensors attached to a bicycle that record, e.g., forces or suspension travel while riding. To ensure high sample rates in data acquisition, the data capturing and storage tasks are distributed across several microcontrollers, while monitoring and control are performed by a single-board computer. In addition, the measuring system is intended to simplify the tedious documentation of ground truth. We present results obtained by using time series analysis to identify different types of obstacles from the raw sensor signals.

Presentation

Only for registered participants

Material

Only for registered participants

We-B2 — Localization and Tracking

Chair: Fränken, Dietrich (Hensoldt Sensors)|Co-Chair: Ahmed, Nisar (University of Colorado Boulder)|Q&A Session: Wednesday, 16:15 – 16:40 UTC

Zoom link: Only for registered participants

1 - Bayesian Deghosting Algorithm for Multiple Target Tracking

Authors
    Kulmon, Pavel (Czech Technical University in Prague, Department of Applied Info)
Abstract

This paper deals with bistatic track association in classical FM-based Multistatic Primary Surveillance Radar (MSPSR). We formulate the deghosting procedure as Bayesian inference of the association matrix between bistatic tracks and targets, together with the target positions. To do so, we formulate a prior probability distribution for the association matrix and develop a custom Markov Chain Monte Carlo (MCMC) sampler, which is necessary to solve such a hybrid inference problem. Using simulated data, we compare the performance of the proposed algorithm with two others and show its superior performance in this setup. At the end of the paper, we also outline further research on the algorithm.

Presentation

Only for registered participants

Material

Only for registered participants

2 - Identification of Kinematic Vehicle Model Parameters for Localization Purposes

Authors
    Fazekas, Mate (SZTAKI)
    Gaspar, Peter (SZTAKI)
    Nemeth, Balazs (MTA SZTAKI Institute for Computer Science and Control)
Abstract

The article proposes a parameter identification algorithm for a kinematic vehicle model from real measurements of on-board sensors. The motivation is to improve localization in cases of poor sensor performance: for example, when GNSS signals are unavailable, when vision-based methods fail due to a low number of features, or when IMU-based methods fail due to the lack of frequent accelerations. In these situations, wheel encoder-based odometry can be an appropriate choice for pose estimation; however, this method suffers from parameter uncertainty. The proposed method combines Gauss-Newton non-linear estimation techniques with Kalman filtering in an iterative loop and identifies the wheel circumferences and the track width in three steps. The estimation architecture avoids both convergence to a local optimum and the divergence that results from highly uncertain initial parameter values. The identification performance is verified in a real test with a compact car. The results are compared with the nominal setting, which would have to be applied in the absence of identification.
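
For reference, and as standard differential-drive kinematics rather than the paper's exact model, the identified parameters enter wheel-encoder odometry directly: with wheel circumferences c_L, c_R, track width b, and encoder rates n_L, n_R in revolutions per second, the body velocity and yaw rate are

    v = \frac{c_R n_R + c_L n_L}{2},
    \qquad
    \omega = \frac{c_R n_R - c_L n_L}{b},

so biased circumference or track-width values translate directly into drifting pose estimates, which is what the identification corrects.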

Presentation

Only for registered participants

Material

Only for registered participants

3 - Effect of Kernel Function to Magnetic Map and Evaluation of Localization of Magnetic Navigation

Authors
    Takebayashi, Takumi (Graduate School of Utsunomiya University)
    Miyagusuku, Renato (Utsunomiya University)
    Ozaki, Koichi (Utsunomiya University)
Abstract

Localization is one of the most fundamental requirements for the use of autonomous robots. In this work, we use magnetic localization, which, while not as accurate as laser rangefinder or camera-based systems, is not affected by large numbers of people in its surroundings, making it ideal for applications where crowds are expected, such as service robotics in supermarkets, hotels, etc. Magnetic localization systems first create a magnetic map of the environment using magnetic samples acquired a priori. One approach for generating this map is to use the collected data to train a Gaussian Process model. Gaussian Processes are non-parametric, data-driven models, and their most important design choice is the selection of an adequate kernel function. The purpose of this study is to improve the accuracy of magnetic localization by testing several kernel functions and experimentally verifying their effects on robot localization.
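
A small sketch of fitting field samples with Gaussian Process regressors under two different kernels, assuming scikit-learn; the kernels, hyperparameters, and 1-D toy data are illustrative rather than the paper's setup.

    # Sketch: comparing kernel functions for a Gaussian Process magnetic map.
    # Uses scikit-learn; kernels, hyperparameters, and toy data are illustrative.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, Matern, WhiteKernel

    rng = np.random.default_rng(7)
    x_train = rng.uniform(0, 10, size=(60, 1))                       # positions [m]
    y_train = np.sin(1.5 * x_train[:, 0]) + rng.normal(0, 0.1, 60)   # field magnitude

    kernels = {
        "RBF":        1.0 * RBF(length_scale=1.0) + WhiteKernel(0.01),
        "Matern 3/2": 1.0 * Matern(length_scale=1.0, nu=1.5) + WhiteKernel(0.01),
    }
    x_test = np.linspace(0, 10, 5).reshape(-1, 1)
    for name, kernel in kernels.items():
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_train, y_train)
        mean, std = gp.predict(x_test, return_std=True)
        print(f"{name:>10}: mean {np.round(mean, 2)}  std {np.round(std, 2)}")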

Presentation

Only for registered participants

Material

Only for registered participants

4 - Dynamic Adaption of Noise Covariance for Accurate Indoor Localization of Mobile Robots in Non-Line-Of-Sight Environments

Authors
    Ghosh, Dibyendu (Intel Corporation)
    Honkote, Vinayak (Intel Corporation)
    Narayanan, Karthik (Intel Corporation)
Abstract

The estimation of robot pose in an indoor, unknown environment is a challenging problem. Traditional methods using wheel odometry and an inertial measurement unit (IMU) are inaccurate due to wheel slippage and drift. Ultra-wide-band (UWB) technology fused with an extended Kalman filter (EKF) provides relatively accurate ranging and localization in line-of-sight (LOS) scenarios. However, the presence of physical obstacles in an indoor environment, such as walls and doors, referred to as non-line-of-sight (NLOS) conditions, poses additional challenges that are difficult to address using UWB alone. Identifying LOS/NLOS conditions can greatly benefit many location-related applications. To this end, an algorithm based on the variance of the distance estimates together with the power envelope of the received signal is proposed for NLOS identification. Further, an adaptive adjustment of the sensor noise covariance is devised to mitigate the NLOS effect. The proposed methodology is computationally light and has been thoroughly tested. The results demonstrate that the proposed method achieves a ∼2X improvement in accuracy compared to the existing approach.
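
A toy sketch of a variance-based NLOS flag and the covariance inflation it drives; the window length, threshold, and inflation factor are assumptions for illustration, not the paper's values.

    # Sketch: variance-based NLOS flagging of UWB ranges and adaptive inflation of
    # the EKF measurement-noise covariance. Values are illustrative assumptions.
    import numpy as np

    def nlos_flag(ranges, window=20, var_threshold=0.05):
        # high short-term variance of the range estimates suggests NLOS conditions
        return np.var(ranges[-window:]) > var_threshold

    def adapted_measurement_cov(r_los, is_nlos, inflation=10.0):
        return r_los * inflation if is_nlos else r_los

    rng = np.random.default_rng(8)
    los_ranges = 5.0 + rng.normal(0, 0.05, 200)              # clean LOS ranging
    nlos_ranges = 5.4 + rng.normal(0, 0.4, 200)              # biased, noisy NLOS

    for name, ranges in [("LOS", los_ranges), ("NLOS", nlos_ranges)]:
        flag = nlos_flag(ranges)
        R = adapted_measurement_cov(r_los=0.05 ** 2, is_nlos=flag)
        print(f"{name}: flagged NLOS = {flag}, measurement variance used = {R:.4f}")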

Presentation

Only for registered participants

Material

Only for registered participants

5 - Localization and Velocity Estimation Based on Multiple Bistatic Measurements

Authors
    Woischneck, Sebastian (Hensoldt Sensors)
    Fränken, Dietrich (Hensoldt Sensors)
Abstract

This paper discusses algorithms that can be used to estimate the position, and possibly also the velocity, of an object by means of bistatic measurements. For position-only estimation based on bistatic range measurements, improved versions of an approximate maximum-likelihood estimator are introduced and compared with methods known from the literature. The new estimators are then extended to also estimate velocity based on additional range-rate measurements. Simulation results confirm that the proposed estimators yield errors close to the Cramér-Rao lower bound.
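
As standard bistatic geometry (background, not reproduced from the paper), each transmitter–receiver pair with positions p_tx, p_rx constrains the target position x and velocity v through the bistatic range and range rate

    r_b = \lVert x - p_{\mathrm{tx}} \rVert + \lVert x - p_{\mathrm{rx}} \rVert,
    \qquad
    \dot{r}_b = \frac{(x - p_{\mathrm{tx}})^{\top} v}{\lVert x - p_{\mathrm{tx}} \rVert}
              + \frac{(x - p_{\mathrm{rx}})^{\top} v}{\lVert x - p_{\mathrm{rx}} \rVert},

assuming static transmitter and receiver; measurements from several such pairs then allow the position, and with range rates also the velocity, to be estimated.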

Presentation

Only for registered participants

Material

Only for registered participants