
FastSLAM2

In order to describe FastSLAM2, as well as the choice to use it, we will work downwards in steps: from the overall problem of SLAM, through the generic strategies to solve it, down into the algorithm itself.

The SLAM Problem

To solve the SLAM problem is to answer the age-old question

What came first, the chicken or the egg?

In less symbolic language, let's put it this way: during operation of the robot, in almost every timestep k ∈ ℕ, we attempt to answer the question 'Where are we?', which implicitly contains two much more concrete questions:

What is my position, relative to my surroundings?

What are my surroundings and their meaning or position relative to me?

Without any mathematics involved, one can already start to see an emerging intrinsic relationship between the localisation problem and the mapping problem.

Certainty cannot be achieved during timestep k, and as such, the art is to get as much certainty as possible without exceeding a bearable workload (so as not to find ourselves too computationally burdened to be ready for the next timestep, k+1). There are many ways to cope with this problem, from the simplistic 'not dealing with it at all', using a Zero-Order Hold (ZOH), or a ZOH that includes the proposed movement and positions from the current timestep, all the way to the rather absolute, offline 'GraphSLAM' method, which saves countless measurement points and tries to optimize the sense these points make relative to each other (or: attempts to minimize the magnitude of discrepancy between measurement points).

Our problem is marked by being online (PICO needs to know what's going on while it's happening), in a rigid environment, under uncertain starting conditions, with noisy motion and noisy measurements. PICO additionally has to deal with random, dynamic elements; one could attempt to handle these inside the SLAM module of a program, but we won't, simply so we can keep using a rigid-world assumption. Instead, the workload for coping with dynamic elements falls on feature recognition, which will have to make the judgment call to label a random element static (and add it to the map of rigid elements) or dynamic.


In the world of online SLAM methods, we will very quickly find ourselves submerged in world features considered through Extended Kalman filters (EKFs), and since we have noise (or: a constant threat to localization certainty), as well as an uncertain starting position, we will also find ourselves looking to use a particle filter, so as not to find ourselves locked into a single wrong hypothesis.

This brings us to the next section, where we will explain what the inner workings of an EKF particle filter are.

Simultaneous Localization and Mapping using Particle Filters

In this section, we will expand on the basis given by the course lectures. Most additional knowledge here is derived from the (freely available) lectures on SLAM by Cyrill Stachniss, a full professor at the University of Bonn.


Let us consider first the particle filter side of our fledgling concept of SLAM. Each particle within the cloud of n particles represents a hypothesis, containing within itself a presumed location (x,y) of the platform - PICO - and its assumptions about the surroundings - the EKF part of the problem. With each update, these hypotheses are somehow updated under assumptions about the predicted motion since the last timestep and under interpretation of the measurements received in the current timestep.
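To make this a little more concrete, here is a minimal sketch of what such a hypothesis could look like in code; all names and types below (Pose, Landmark, Particle) are illustrative assumptions of ours rather than an existing interface, and we include a heading θ alongside (x,y), since a planar platform hypothesis needs one in practice.

#include <vector>

// Illustrative sketch of a single FastSLAM-style hypothesis (names are our own).
struct Pose {
    double x, y, theta;              // presumed position and heading of the platform
};

struct Landmark {
    double mean[2];                  // estimated landmark position (mean of its small EKF)
    double covariance[2][2];         // 2x2 uncertainty of that estimate
};

struct Particle {
    Pose pose;                       // the localization half of the hypothesis
    std::vector<Landmark> map;       // the mapping half: one small EKF per landmark
    double weight;                   // belief assigned to this hypothesis
};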

The crux of a particle filter solution is to somehow weigh assumptions against measurements. To keep things formal, let us steal a little part of the EKF side of the problem, and already introduce the concepts of a prior evaluation (the assumed position) and a posterior evaluation (the expected position based on measurements). This, formally, gives us the following primary mechanic to work our particle filter with:

[Figure MRC2020 Group4 SLAM Weight.png: the definition of the weight given to a hypothesis]

Armed with a definition for the weight (or 'belief') we give a certain hypothesis, we can talk about the workings of our particle filter. After every timestep, every particle - and thus every hypothesis - has been given a weight; these weights have meaning relative to one another, and for the sake of keeping the risk of binary overflow errors to a minimum, these are linearly normalized somehow relative to some measure - e.g. the sum weight or the maximum weight.
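As a minimal sketch of this normalization step, here scaling against the sum of all weights (the Particle type is the illustrative one sketched earlier; dividing by the maximum weight instead would work just as well):

// Linearly normalize all weights against their sum, so that relative beliefs are
// preserved while the numbers themselves stay small and well-behaved.
void normalizeWeights(std::vector<Particle>& particles) {
    double total = 0.0;
    for (const Particle& p : particles) total += p.weight;
    if (total <= 0.0) return;        // degenerate cloud: leave the weights untouched
    for (Particle& p : particles) p.weight /= total;
}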

We now steal a page from the concepts constituting machine learning; we cull hypotheses that are too weak, and let the likely hypotheses dominate the cloud. That is: some particles' respective weights, after normalization, are weaker than some minimum parameter, and so these hypotheses are destroyed. After this, we somehow decide whether our hypothesis population is underpopulated, and repopulate it with n_new new particles, drawn from the pool of hypotheses we currently have, during a resampling process. This resampling process is best described as spinning a wheel, where the portions of the wheel allocated to particular particles are scaled according to their weights.

Literature suggests that low-variance resampling is the way to go here; we will refer to it as 'roulette wheel resampling' (it is also known as systematic or stochastic universal resampling). The idea is that normal spin-the-wheel-with-every-pick resampling has a (completely unnecessary) chance to arbitrarily favor a particular group of hypotheses by repeatedly picking their members as resampling candidates. Roulette wheel resampling instead spins the wheel once, and then jumps around the wheel n_new times, with the jump size equal to the total weight of all particles divided by n_new.

[Figure MRC2020 Group4 SLAM StachnissRoulette.png: pick-every-sample resampling (left) versus the roulette wheel approach (right), from Stachniss' slides]

The above picture, from Stachniss' slides on probabilistic robotics, is a clear illustration of the normal pick-every-sample algorithm (left) and the roulette wheel approach (right). Note, additionally, that the simple approach has algorithmic time complexity O(n log(n)), as every sample also needs to be looked up by index in the particle list (a binary search over the cumulative weights), whereas the roulette wheel approach does this lookup intrinsically as part of generating the 'spokes' of the wheel, and hence has algorithmic complexity O(n).
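A compact sketch of the roulette wheel scheme described above, assuming the weights have already been normalized to sum to one (the Particle type is again our illustrative one):

#include <cstddef>
#include <random>
#include <vector>

// Low-variance ('roulette wheel') resampling: spin the wheel once to get a random
// starting offset, then take n_new equally spaced jumps of size (total weight / n_new),
// which is 1/n_new for normalized weights. A single pass over the list gives O(n).
std::vector<Particle> lowVarianceResample(const std::vector<Particle>& particles,
                                          std::size_t n_new, std::mt19937& rng) {
    std::vector<Particle> resampled;
    if (particles.empty() || n_new == 0) return resampled;
    resampled.reserve(n_new);

    std::uniform_real_distribution<double> spin(0.0, 1.0 / n_new);
    const double start = spin(rng);              // the single spin of the wheel
    double cumulative = particles[0].weight;
    std::size_t i = 0;

    for (std::size_t j = 0; j < n_new; ++j) {
        const double target = start + j * (1.0 / n_new);
        while (cumulative < target && i + 1 < particles.size()) {
            ++i;
            cumulative += particles[i].weight;
        }
        resampled.push_back(particles[i]);       // strong hypotheses get picked repeatedly
    }
    return resampled;
}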


This concludes the particle filter side of our understanding of SLAM. In one pseudocode block, let us summarize the core concepts before we dive into the ugly side that is Gaussian predicates:

Cloud particleFilter(Cloud particles, PredictedMotion u, Observation z)
    propagate every particle with the predicted motion u, then weight it against the observation z
    normalize the weights and cull hypotheses that fall below the minimum weight
    return the cloud, repopulated to n_new particles by roulette wheel resampling
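Finally, a slightly more concrete sketch of that same update step, reusing the illustrative Particle type and the normalizeWeights and lowVarianceResample sketches from above. The motion model and the (EKF-based) measurement model are passed in as callables; their names, and the idea that u and z are captured inside them, are our own assumptions, to be filled in by the EKF side of the story.

#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// One particle filter update: predict, weight, normalize, cull, resample.
// 'samplePrediction' is assumed to apply the predicted motion u (with noise) to a pose;
// 'evaluateMeasurement' is assumed to score how well the observation z fits a hypothesis.
std::vector<Particle> particleFilterStep(
        std::vector<Particle> particles,
        const std::function<Pose(const Pose&)>& samplePrediction,
        const std::function<double(const Particle&)>& evaluateMeasurement,
        double minWeight, std::size_t n_new, std::mt19937& rng) {
    for (Particle& p : particles) {
        p.pose   = samplePrediction(p.pose);     // prior: where do we think we are now?
        p.weight = evaluateMeasurement(p);       // posterior vs. prior: how well does z fit?
    }
    normalizeWeights(particles);

    // Cull hypotheses whose normalized weight falls below the minimum parameter.
    std::vector<Particle> survivors;
    for (const Particle& p : particles)
        if (p.weight >= minWeight) survivors.push_back(p);
    normalizeWeights(survivors);                 // re-normalize the surviving beliefs

    // Repopulate the cloud to n_new particles by spinning the roulette wheel.
    return lowVarianceResample(survivors, n_new, rng);
}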