Mobile Robot Control 2024 The Iron Giant

{| class="wikitable"
|+ Group members
!Name
!Student ID
|-
|Marten de Klein
|1415425
|-
|Ruben van de Guchte
|1504584
|-
|Vincent Hoffmann
|1897721
|-
|Adis Husanović
|1461915
|-
|Lysander Herrewijn
|1352261
|-
|Timo van der Stokker
|1228489
|}


== '''Week 1: The art of not crashing''' ==
'''Think of a method to make the robot drive forward but stop before it hits something.'''


''These exercises were performed individually by the members, leading to different outcomes.''


There were a lot of similarities between the answers. Every group member used the laser data to determine whether objects were close. All members wrote a loop over the laser range data that checks the individual values against a safety threshold. If an object came close enough to the robot, in all cases a signal would be sent to the motors to stop moving forward. However, the way members determined whether an object was in front differed per member:


Ruben decided not to loop over all laser ranges but to check only the ones in front of the robot. To determine which laser beams actually cover the area the robot is going to drive into, a geometric calculation is executed, using the arctangent of the required safety distance and the width of the robot to determine the maximum beam angle that needs to be checked. When checking whether objects are too close, a geometric correction term is added so that the safety distance is consistent along a line parallel to the driving direction; the region checked in front of the robot is thus not the circular arc the LiDAR naturally measures.
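A minimal sketch of such a front-corridor check is shown below. It assumes the course's emc framework <code>LaserData</code> layout (ranges, angle_min, angle_increment); the function and parameter names are our own illustration, not Ruben's actual code.

<syntaxhighlight lang="cpp">
// Sketch of a front-corridor check, assuming the emc framework's LaserData
// layout (ranges, angle_min, angle_increment). Names are illustrative.
#include <cmath>
#include <cstddef>
#include <emc/io.h>

bool obstacleInCorridor(const emc::LaserData& scan,
                        double safety_dist,  // stopping distance [m]
                        double half_width)   // half the robot width [m]
{
    // Widest beam angle that can still hit the corridor the robot drives into
    const double max_angle = std::atan2(half_width, safety_dist);

    for (std::size_t i = 0; i < scan.ranges.size(); ++i)
    {
        const double angle = scan.angle_min + i * scan.angle_increment;
        if (std::abs(angle) > max_angle)
            continue; // beam points outside the driving corridor

        // Geometric correction: a beam at angle theta crosses the stopping
        // line x = safety_dist at range safety_dist / cos(theta), so the
        // stopping threshold is a straight line rather than a circle.
        const double limit = safety_dist / std::cos(angle);
        if (scan.ranges[i] > scan.range_min && scan.ranges[i] < limit)
            return true;
    }
    return false;
}
</syntaxhighlight>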


Lysander made his robot start turning when an object was detected. In the next loop the robot therefore receives different laser scan data, and after a few loops the object may fall outside the angles the laser scanner covers, at which point the robot drives forward again, having turned away from the obstacle.


# All members read the laser data.
# A loop over the data picks out the smallest laser scan distance.
#* In Ruben's case, the angles of the laser data were also used to select which specific laser beams should be included when determining whether it is safe to drive in a certain direction.
# If one of the (selected) beams measures a distance smaller than the safety threshold, the robot stops.
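A minimal sketch of this shared pattern, assuming the course's emc framework (the threshold and speeds are placeholders; each member's actual values differed):

<syntaxhighlight lang="cpp">
// Minimal don't-crash loop following the three steps above, assuming the
// emc framework; SAFETY_DIST is a tunable threshold, not a shared value.
#include <emc/io.h>
#include <emc/rate.h>

int main()
{
    emc::IO io;
    emc::Rate r(10);                 // control loop at 10 Hz
    const double SAFETY_DIST = 0.3;  // [m], tuned per member

    while (io.ok())
    {
        emc::LaserData scan;
        if (io.readLaserData(scan))
        {
            // Step 2: pick out the smallest laser scan distance
            double closest = scan.range_max;
            for (double range : scan.ranges)
                if (range > scan.range_min && range < closest)
                    closest = range;

            // Step 3: stop if anything is within the safety distance
            if (closest < SAFETY_DIST)
                io.sendBaseReference(0.0, 0.0, 0.0);
            else
                io.sendBaseReference(0.3, 0.0, 0.0); // drive forward
        }
        r.sleep();
    }
    return 0;
}
</syntaxhighlight>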


'''Run your simulation on two maps, one containing a large block in front of the robot, the second containing a block the robot can pass by safely when driving straight.''' 
{| class="wikitable"
|+
!Name
!Code
!Video exercise 1
!Video exercise 2
|-
|'''Lysander Herrewijn'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/dontcrash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise1_lysander.mp4?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise2map1_lysander.mp4?ref_type=heads Screen capture exercise 2 map 1]'''  '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise2map2_lysander.mp4?ref_type=heads Screen capture exercise 2 map 2]'''
|-
|'''Adis Husanovic'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/src/dont_crash.cpp Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_1.webm?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_2_Test1.webm?ref_type=heads Screen capture exercise 2 map 1]  [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_2_Test2.webm?ref_type=heads Screen capture exercise 2 map 2]'''
|-
|'''Marten de Klein'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/src/dont_crash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/default_map_Marten.mp4?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/map1_test_Marten.mp4?ref_type=heads Screen capture exercise 2 map 1]'''  '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/map2_test_Marten.mp4?ref_type=heads Screen capture exercise 2 map 2]'''
|-
|'''Ruben van de Guchte'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/dont_crash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/Videos_Ruben/Mobile_roboto__Running__-_Oracle_VM_VirtualBox_2024-04-29_13-27-37.mp4?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/Videos_Ruben/Mobile_roboto__Running__-_Oracle_VM_VirtualBox_2024-04-29_14-07-45.mp4?ref_type=heads Screen capture exercise 2]'''
|-
|'''Vincent Hoffmann'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/src/dont_crash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise1_vincent.mp4?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise2_map1_vincent.mp4?ref_type=heads Screen capture exercise 2 map 1]'''    '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise2_map2_vincent.mp4?ref_type=heads Screen capture exercise 2 map 2]'''
|-
|'''Timo van der Stokker'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/src/dont_crash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/Screen_captures/DefaultMap_Timo.mp4?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/Screen_captures/Map1_Timo.mp4?ref_type=heads Screen capture exercise 2 Map 1]'''    '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/Screen_captures/Map2_Timo.mp4?ref_type=heads Screen capture exercise 2 Map 2]'''
|}
All runs stop in time in front of the walls for exercise 1 and for map 1 in the second exercise. However, for the second map, only Marten's, Adis' and Ruben's code caused the robot to drive past the obstacle where it was safe to do so. Adis and Marten implemented a safety distance small enough that the robot drove past the obstacle. Ruben's method is more dynamic and suitable in a wider range of cases; it is also better at handling upcoming obstacles further away. The angular region in which the robot is not allowed to drive when a laser point is closer than its safety distance becomes smaller as the robot gets closer to a gap. We deemed this preferable, as the robot should be able to navigate smaller corridors. A similarly dynamic approach should therefore be implemented for the local navigator.


=== Practical demonstration ===
The laser data contained less noise than we expected; the measurements are fairly accurate. However, the LiDAR can only see objects at the height of its own scanning plane. For example, when standing in front of the robot, the laser could only detect our shins as two half circles (the fronts of the legs).


When testing our don't-crash files on the robot, it stood out that the stopping distance needed to include the distance from the measuring point (the LiDAR) to the edge of the robot, which was measured to be approximately 10 cm. After accounting for this, the robot was first tested on a barrier, as seen in '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/exercises/Weekly_exercises/Week1/Practical_exercise1/Stop_barrier.mp4 Robot stopping at barrier]'''.


Next, we let a person walk in front of it to see if the code would still work. Fortunately, it did, as can be seen in '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/exercises/Weekly_exercises/Week1/Practical_exercise1/stop_moving_feet.mp4 Robot stopping at passing person]'''.


Finally, we tested Lysander's code that turns the robot when it sees an obstacle and then continues. This gives an indication of the robot's blind spot, as it only drives when nothing within its vision range is inside its safety distance. This can be seen in '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/exercises/Weekly_exercises/Week1/Practical_exercise1/Stop_turn_feet.mp4 Robot stopping and turning at feet]'''.
== '''Week 2: Local Navigation''' ==
[[File:Example VFH.png|thumb|A theoretical view of the VFH technique; the numbers are made up and only for illustration purposes. The robot counts the number of beams within its LiDAR that hit an object in a certain angular bracket. The range of angles, [-110°, 110°], is what we have approximated based on our tests.]]


=== '''Vector field histogram (VFH)''' ===


==== Implementation: ====
The simplified vector field histogram approach was initially implemented as follows.


The robot starts out with a goal retrieved from the global navigation function, laser data received from the LiDAR, and its own position, which it keeps track of internally. A localization function will eventually be added to enhance its ability to track its own position. The laser data points are grouped into evenly spaced brackets by their polar angle relative to the robot. For each bracket, the algorithm checks how many LiDAR points lie closer than a specified safety threshold distance and saves that count. This is indicated by the blue bars in the plot to the right.
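A standalone sketch of this bracketing step (names are illustrative; the real implementation works on the emc laser data directly):

<syntaxhighlight lang="cpp">
// Sketch of the histogram step: count LiDAR hits inside the safety
// distance per angular bracket. Names are illustrative.
#include <algorithm>
#include <cstddef>
#include <vector>

std::vector<int> buildHistogram(const std::vector<float>& ranges,
                                double angle_min, double angle_inc,
                                int n_brackets, double safety_dist)
{
    std::vector<int> hist(n_brackets, 0);
    const double span = angle_inc * ranges.size();  // total angular range
    const double bracket_width = span / n_brackets;

    for (std::size_t i = 0; i < ranges.size(); ++i)
    {
        if (ranges[i] <= 0.0 || ranges[i] >= safety_dist)
            continue; // only count hits inside the safety distance

        const double angle = angle_min + i * angle_inc;
        const int b = std::min(n_brackets - 1,
                               static_cast<int>((angle - angle_min) / bracket_width));
        ++hist[b]; // one more obstacle hit in this bracket (blue bar height)
    }
    return hist;
}
</syntaxhighlight>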


Next, it calculates the direction of the goal by computing the angle between its own position and the goal position. It then checks whether the angle towards the goal is unoccupied by checking the values of the bracket corresponding to that angle. A bracket is unoccupied if the number of LiDAR hits within the safety distance is lower than the threshold value. Additionally, some neighboring brackets are checked, depending on a safety width hyperparameter; the goal of this safety width is to make sure the gap is wide enough for the robot to fit through. If the direction towards the goal is occupied, the code checks the brackets to the left and to the right, saves the closest unoccupied angle at either side, picks whichever deviation is smaller, and sets that angle as its new heading. If we look at our example histogram, we can see that the forward direction to the goal is open. However, depending on the number of safety brackets, the behavior might change. In our case, we use 100 brackets, as shown in the example histogram, and in some cases about 5 safety brackets. In the example, the robot would steer slightly in the positive angle direction (to the right) to avoid the first brackets left of x=0 that exceed the safety threshold.
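Continuing the histogram sketch above, the selection of the closest free bracket could look as follows (<code>hit_threshold</code> and <code>safety_brackets</code> correspond to the threshold value and the safety width hyperparameter; names are illustrative):

<syntaxhighlight lang="cpp">
// Hypothetical continuation of the VFH sketch: find the free bracket
// closest to the goal direction, requiring some free neighbours for width.
#include <vector>

int pickBracket(const std::vector<int>& hist, int goal_bracket,
                int hit_threshold, int safety_brackets)
{
    const int n = static_cast<int>(hist.size());
    auto isFree = [&](int b) {
        if (b - safety_brackets < 0 || b + safety_brackets >= n)
            return false; // too close to the edge of the field of view
        for (int k = b - safety_brackets; k <= b + safety_brackets; ++k)
            if (hist[k] >= hit_threshold)
                return false; // a neighbouring bracket is occupied
        return true;
    };

    // Search outward from the goal direction, left and right alternately,
    // so whichever free deviation is smaller wins.
    for (int offset = 0; offset < n; ++offset)
    {
        if (isFree(goal_bracket - offset)) return goal_bracket - offset;
        if (isFree(goal_bracket + offset)) return goal_bracket + offset;
    }
    return -1; // no free direction found
}
</syntaxhighlight>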


Afterwards, it compares its own heading with the goal angle and drives forward if it is aligned within a small angular margin of the goal direction. The global navigator keeps track of the robot's position and updates the goal once the robot is close enough to its current goal.


This initial implementation had some oversights and edge cases that we came across when testing in the simulator and on the real robot. The first issue arose when the system had to turn a corner after driving next to an obstacle. At such a small distance, the cone of laser data that needs to be checked is very large; if the same cone were used at larger distances, the robot would discard routes it could easily fit through. This was fixed by performing a second check on the laser data at the side the robot is turning towards; if this check finds an object, the robot moves forward a bit more before turning. This solved the problem of the robot steering into objects or walls too early.


'''Advantages''':


* Implementing VFH for navigation is relatively straightforward, requiring only basic processing of LiDAR data to compute the histogram of obstacle directions.
* VFH can generate smooth and collision-free paths for the robot by considering both obstacle avoidance and goal-reaching objectives.
* VFH is computationally efficient and robust to noisy sensor data.


'''Disadvantages''':


* VFH may have difficulty distinguishing overlapping obstacles, especially if they are close together and occupy similar angular regions in the LiDAR's field of view. This can be mitigated by only taking LiDAR points into account below a small distance.
* In complex environments with narrow passages or dense clutter, VFH may struggle to find feasible paths due to the limited information provided by the LiDAR sensor and the simplicity of the VFH algorithm.
* VFH performance can be sensitive to parameter settings, such as the size of the histogram bins or the threshold for obstacle detection. Tuning these parameters for optimal performance may require extensive experimentation.


==== Possible failure scenarios: ====


* '''A possible failure scenario''' encountered in testing was that the robot's orientation deviated so far from the goal that it became unable to directly check the direction the goal was in. If the robot has to turn so far that the goal falls in the 'blind spot' of 360-220=140 degrees, the robot assumes there is an available path in that direction and rotates towards it. After rotating, it sees that the direction is occupied, turns the other way towards space it can move to, and this cycle repeats.
** '''Solution:''' This problem could be solved by implementing an internal representation of the points that were detected before as was done in [https://doi.org/10.1109/70.88137 Borenstein, J., & Koren, Y. (1991)].  In our implementation we instead chose to solve this by making sure there are no such challenges in the path by choosing more intermediate goal points in the global navigator.


* '''Another possible failure scenario''' presents itself when the robot needs to drive more than 90 degrees away from its goal. In this case, driving away actually puts the robot further from its goal, and since it has no memory, it forgets that there is an obstacle that caused it to turn this way in the first place. After driving until the obstacle exits its detection range, it will try to turn back. This causes the robot's behavior to loop between driving away from and towards the obstacle.
** '''Solution:''' We again avoid this failure case by only letting the local navigator handle simple paths towards intermediate goals.


==== Implementation of local and global algorithms: ====
The global and local planner can be easily connected via a main script. In the main script the global planner is called first generating a set of target goals. Then the local planner is called and only given the first target goal. The main script keeps track of the distance from the robot to the target goal and if this distance is smaller than a given value the local planner is given the next target goal until it reaches the final target goal and the main velocity is set to zero. In case a target goal is not reachable (e.g. the distance to the target goal does not reduce for a given time) the global planner should be called again with an updated map.  
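A sketch of this switching logic in the main script (names are illustrative; the switch distance of roughly 0.2 m matches the value used in our tests):

<syntaxhighlight lang="cpp">
// Sketch of the main-script waypoint switching described above.
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Advance through the global plan: return the index of the goal the local
// planner should currently chase. Assumes a non-empty plan.
std::size_t updateTarget(const std::vector<Point>& plan, std::size_t current,
                         Point robot, double switch_dist = 0.2)
{
    const double dx = plan[current].x - robot.x;
    const double dy = plan[current].y - robot.y;
    if (std::hypot(dx, dy) < switch_dist && current + 1 < plan.size())
        ++current; // close enough: hand the next waypoint to the local planner
    return current;
}
</syntaxhighlight>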


==== Demonstrations: ====
'''Simulation:'''


The simulations show how the robot navigates through the maps. There are a few things to notice. Firstly, the robot is not very dynamic: it can only turn when standing still. If the direction the robot is facing is safe, it drives forward; otherwise it turns. If the VFH is used in the final challenge, a distinction must be made such that the robot does drive forward while turning when no object is directly in front of it. Furthermore, we can see how the VFH finds local minima and manages to escape them by turning around and exploring the other side. This happens, for example, in map 3 after turning past the first corner. The settings are such that the robot can pass the small corridor on map 4 after the first corner. As we selected only a few safety brackets to pass this corridor, the robot cuts corners fairly tightly in the simulation. For the real-life scenario, this should be adjusted to attain a bigger safety margin around corners.


[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week2/Histogram/Captures/map3_histogram.webm?ref_type=heads '''Simulation Demonstration Video MAP3 VFH''']


[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week2/Histogram/Captures/map4_histogram.webm?ref_type=heads '''Simulation Demonstration Video MAP4 VFH''']


'''Practical demonstration:'''


At the first obstacle, the robot passes relatively tightly: it moves on without stopping and turning, knowing it will make it. At the second obstacle, it stops, turns, drives, and repeats this a few times before it passes the corner. This is an example where driving forward while turning would give a smoother and faster result. All in all, the VFH has proven to be a reliable method for avoiding obstacles.


[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week2/Histogram/Captures/VFH_practical_demonstration.mp4 '''Experimental Demonstration Video VFH''']
----


=== '''Dynamic Window Approach (DWA)''' ===
[[File:DWA Explanation.png|thumb|553x553px]]


==== Implementation: ====
The DWA algorithm is a velocity based algorithm based on maximizing an objective function. The general overview is shown in the figure to the right.


The algorithm starts by defining the velocity and acceleration constraints, such as velocity and acceleration limits, braking accelerations, and safety distances. It then creates a set of possible velocities by constructing a grid from the minimum to the maximum allowable velocities in discrete steps. Velocities that are not reachable within a single time step, given the current velocity and the acceleration limits, are filtered out. Each remaining velocity in the set is checked for potential collisions with obstacles detected by the laser sensors; if a combination of forward and angular velocity would result in a collision, it is removed from the set. This is done by calculating the distance to obstacles along the curved trajectory the robot would follow and comparing it with the robot's stopping distance. If the velocity is purely forward with no rotation, the distance is simply the shortest straight-line distance.
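A sketch of the first two steps, building the grid of velocities and filtering out those not reachable within one time step (the limits and step sizes below are assumptions, not our tuned values):

<syntaxhighlight lang="cpp">
// Minimal sketch of the dynamic-window construction; all limits are
// illustrative placeholders, not the robot's actual parameters.
#include <cmath>
#include <vector>

struct Velocity { double v; double w; }; // forward and angular velocity

std::vector<Velocity> dynamicWindow(double v_cur, double w_cur, double dt)
{
    const double V_MAX = 0.5, W_MAX = 1.2;   // velocity limits (assumed)
    const double A_MAX = 0.5, AW_MAX = 1.0;  // acceleration limits (assumed)

    std::vector<Velocity> window;
    for (double v = 0.0; v <= V_MAX; v += 0.05)
        for (double w = -W_MAX; w <= W_MAX; w += 0.1)
            // keep only velocities reachable within one time step
            if (std::abs(v - v_cur) <= A_MAX * dt &&
                std::abs(w - w_cur) <= AW_MAX * dt)
                window.push_back({v, w});
    return window; // still needs the collision check described above
}
</syntaxhighlight>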


A scoring system is used to determine which combination of forward and angular velocity is best at each step. For each feasible velocity, a score is computed from three criteria in the objective function: the heading towards the goal, the forward velocity, and the distance to the nearest obstacle. Each criterion is first normalised, and weights are defined to balance their importance. The weights are tuned based on performance in simulation and experiments.
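Conceptually, the objective function then reduces to a weighted sum of the normalised criteria (the weights below are placeholders, not our tuned values):

<syntaxhighlight lang="cpp">
// Conceptual scoring of one candidate velocity; each criterion is assumed
// to be normalised to [0, 1] beforehand. Weights are placeholders.
struct Criteria { double heading, velocity, clearance; };

double objective(const Criteria& c,
                 double w_heading = 1.0,   // alignment with the goal
                 double w_velocity = 0.5,  // prefer fast forward motion
                 double w_clearance = 0.8) // prefer distance to obstacles
{
    return w_heading * c.heading
         + w_velocity * c.velocity
         + w_clearance * c.clearance;
}
</syntaxhighlight>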


Finally, the robot's velocity is updated with the highest-scoring velocity, resulting in safe and smooth motion.




'''Advantages:'''


* Effective at avoiding obstacles detected by the LiDAR sensor in real-time. It dynamically adjusts the robot's velocity and heading to navigate around obstacles while aiming to reach its goal.
* Focuses on local planning, considering only nearby obstacles and the robot's dynamics when generating trajectories. This enables the robot to react quickly to changes in the environment without requiring a global map.


'''Disadvantages:'''


* Can get stuck in local minima, where the robot is unable to find a feasible trajectory to its goal due to obstacles blocking its path. This can occur in highly cluttered environments or when the goal is located in a narrow passage.
* Does not always find the optimal path to the goal, especially in environments with complex structures or long-range dependencies.
* The performance can be sensitive to the choice of parameters, such as the size of the dynamic window or the velocity and acceleration limits of the robot. Tuning these parameters can be challenging and may require empirical testing.


====  Possible failure scenarios: ====
* '''A possible scenario''' is a target position inside a wall. In this case the robot will never reach the target position, because the required velocity to reach the target will be omitted from the velocity set. Instead it will either stop as close as possible to the target position, once no velocities are feasible anymore due to collision risk, or keep moving around the target position trying to minimise the heading angle.
** '''Solution:''' ensuring that a target position is never inside a wall could be a method; however, this only works for known obstacles. If the target position is part of a set of target positions, the robot could try to skip it and immediately go to the next position, with the risk of getting stuck in a local minimum. The best solution is giving the global planner a signal with an updated map and letting it create a new set of target positions.
* '''A second possible scenario''' is that the target position lies inside the radius of curvature of the robot, resulting in the robot circling around the target without ever reaching it.
** '''Solution:''' this scenario can be fixed by tuning the parameters of the algorithm. Making the heading angle more important than the forward velocity is one way of avoiding getting stuck in a radial loop. However, when the target is still far away this is less desirable, because obstacle avoidance works better with a high score on the forward velocity. A solution would be a somewhat dynamic parameter setting, where the robot uses different tuning values when far away from the target than when close to it.
* '''A third possible scenario''' is that the odometry data is incorrect, resulting in a wrong calculation of the desired heading angle. This would result in the robot targeting a different goal location.
** '''Solution:''' one solution is to reduce the acceleration values to reduce wheel slip. This makes the robot slightly slower but better controlled. Another solution, which will be implemented later, is the addition of a localization algorithm that gives the robot a higher positional accuracy.


====  Linking global and local planner: ====
The DWA local planner is linked to the global planner in the same way as described for the VFH above: a main script calls the global planner to generate a set of target goals, hands them one at a time to the local planner, switches to the next goal once the robot is close enough, and calls the global planner again with an updated map if a goal turns out to be unreachable.


==== Demonstrations: ====
'''Simulation:'''


In the first video, the robot moves away from the first obstacle to the right. It then turns back left before it has passed the obstacle. This has to do with the method of only looking in a cone in front of itself: at first it does not see the walls beside itself, since the hallway is quite wide. When the robot gets closer to the obstacle, it again turns right and passes it with a safe distance between itself and the obstacle. Then it rotates towards the opening and drives in a smooth motion to its final destination.


In the second video, the robot smoothly moves away from its first obstacle. Then it turns back left to move away from the second obstacle. After the second obstacle it orients itself back to the goal and chooses to go around the third obstacle on the right. In some simulations it chose the other direction; this is purely based on where it sees an opening first. Overall, a safe distance is held, and the obstacles are already taken into account from quite far away.


'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week2/DynamicWindow/Captures/map2loc_DWA.webm?ref_type=heads Simulation Demonstration Video MAP2 DWA]'''
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week2/DynamicWindow/Captures/map3loc_DWA.webm?ref_type=heads Simulation Demonstration Video MAP3 DWA]'''
 
'''Practical demonstration:'''
 
In the video, the robot smoothly moves towards its goal, adjusting its path as needed when it encounters obstacles. At the start, the robot moves towards the goal until it meets a wall, then it turns in the right direction to avoid the obstacle. After turning, it keeps moving parallel to the wall. Approaching a corner formed by two walls, the robot slows down, ensuring it has enough space to make a left turn. There is enough space available for the robot to fit through the left gap, so it turns there. After turning left, the robot again moves forward parallel to the wall while keeping a safe distance from the adjacent wall. When it completed its last turn around the wall, the robot arrived at its destination accurately enough and stopped smoothly.
 
[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/localPlannerDynamicWindow/Weekly_exercises/Week2/Experimental_Demonstration_DWA.mp4?ref_type=heads '''Experimental Demonstration Video DWA''']
 
== '''Week 3: Global Navigation''' ==
 
 
The goal of this project is to reach a certain destination. In order to reach the destination, a plan must be made which the robot will follow. This larger-scale plan is called the global plan and is made by the global planner. The global planner used throughout this project is '''A*''', which finds the optimal path through a graph. To create the graph on which a path is planned, a Probabilistic Road Map (PRM) is used, which generates a random graph for a map. By combining these functions, an optimal path can be generated for a given map.
[[File:Efficient small maze nodes.png|thumb|221x221px|Efficiently placed nodes]]
 
'''How could finding the shortest path through the maze using the A* algorithm be made more efficient by placing the nodes differently? Sketch the small maze with the proposed nodes and the connections between them. Why would this be more efficient?'''
 
The approach of placing nodes only at locations where the robot can change direction, known as "corner-cutting" or "graph-pruning," is an efficient way to implement the A* algorithm for finding the shortest path through a maze. By reducing the number of nodes and connections that need to be explored, this method significantly reduces the computational complexity and search space. It eliminates redundant paths, simplifies the connectivity representation, improves memory efficiency, and generates smoother paths by allowing the robot to move directly between corner nodes. The efficiently placed nodes can be seen in the figure on the right.
 
'''Would implementing PRM in a map like the maze be efficient?'''
 
PRM works best in open spaces, as it generates random points. In a maze, PRM is unlikely to generate points exactly at the turning points. Additionally, PRM is not ideal for narrow hallways, because points are just as likely to be generated near walls, causing unwanted wiggling as the robot moves from point to point. Furthermore, placing a point within a narrow hallway is difficult: many misplaced samples will be rejected before PRM finally finds a coordinate where a point can be placed. PRM therefore wastes many samples in hallways, making it less suitable there. Since the maze is essentially a collection of narrow hallways, the PRM algorithm would be much less efficient in it.
 
'''What would change if the map suddenly changes (e.g. the map gets updated)?'''
 
If the map suddenly changes, meaning that the map gets updated after the initial path is made, the robot will keep following the path it initially obtained from the A* algorithm on the original map. It will try to keep following the points of the global path while still avoiding obstacles with the local navigator. The local navigator tries to get within 0.2 meters of each point of the global path while avoiding obstacles. As the global path is outdated, the local navigator will probably end up stuck in a local minimum, for instance in an unexpected corner.
 
'''How did you connect the local and global planner?'''
 
The global and local planner are connected via a main script. In the main script the global planner is called first generating a set of points using PRM, then A* is used to determine target goals in the set of points. Then the local planner is called and only given the first target goal. The main script keeps track of the distance from the robot to the target goal and if this distance is smaller than a given value the local planner is given the next target goal until it reaches the final target goal and the main velocity is set to zero. This implementation still assumes that the global planner knows the map quite well and that there are no stationary unknown obstacles which hinder the path of the robot.
 
'''Test the combination of your local and global planner for a longer period of time on the real robot. What do you see that happens in terms of the calculated position of the robot? What is a way to solve this?'''
 
The misalignment between the real robot's position and its calculated position is caused by factors like wheel slippage and odometry inaccuracies. This leads to navigation errors. Localization techniques, such as particle filters, estimate the robot's actual position within the global map by updating pose hypotheses based on sensor data. Integrating localization with planners improves navigation accuracy, enables path planning from the correct position, and allows recovery from accidental displacements.[[File:PRM node map.png|thumb|191x191px|Node map when using PRM]]
 
'''Run the A* algorithm using the gridmap (with the provided nodelist) and using the PRM. What do you observe? Comment on the advantage of using PRM in an open space.'''
 
The observed difference between the routes generated by the A* algorithm using a gridmap representation versus a probabilistic roadmap (PRM) is notable. With the gridmap approach, the robot's path is constrained to horizontal and vertical movements along the grid cells, resulting in a stair-step pattern. While this ensures obstacle avoidance, it can lead to longer and less efficient paths, particularly in open spaces or environments with sparse obstacles.
 
In contrast, the PRM approach offers a significant advantage. By representing the environment as a graph with nodes denoting collision-free configurations and edges representing feasible paths between them, the A* algorithm can find more direct and diagonal routes through the roadmap. This allows the robot to take shortcuts through open areas, minimizing unnecessary turns or detours. Consequently, the paths generated by the PRM approach are typically shorter and have fewer intermediate waypoints to follow, simplifying navigation and control. Additionally, when generating the node map for PRM, the walls are also inflated to avoid nodes being placed too close to walls.  The advantage of the PRM is particularly apparent in open spaces, where it enables the robot to take the most direct routes possible, leading to faster navigation, reduced energy consumption, and potentially smoother trajectories. The comparison can be seen in the figure on the right.[[File:Comparison map.png|thumb|Comparison between using a grid map or using a probabilistic road map for the A* algorithm]]
=== Simulation results ===
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/connectionTest/Weekly_exercises/Week3/global_planner_captures/VFH_globalplanner.webm?ref_type=heads VFH local + global planner simulation]'''
 
The VFH is a strong contender for the final challenge due to its strong obstacle avoidance. Incorporating VFH with the global planner was initially easy; however, it turned out there were still major issues. The robot would traverse the corridor and the walled-in center of the map well, but it would not correctly follow the global planner. The local planner could be set up such that it would drive to the goal just by avoiding walls.
 
The reported simulation is a debugged version in which the VFH local planner nicely follows the nodes of the global planner, switching to the next node when the robot is closer than 20 cm to the node. However, it is not completely debugged yet. Due to the implementation of the arctangent function (atan), the desired angle would flip by 2 pi and the robot would turn completely before driving onward. This happens when position_x-goal_x>0 flips to position_x-goal_x<0, or similarly in the y-direction; using atan2, which takes the signs of both arguments into account, would avoid this. We will attempt to fix this before the final challenge. Once fixed, the goal is to add a script that allows the robot to drive forward and turn simultaneously when no objects are in front of it, making its trajectory smoother. The robot does eventually reach the goal in the simulation. The practical demonstration shows an older version where the goal is incorrect due to similar mistakes in the calculation of the goal and its position.
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/connectionTest/Weekly_exercises/Week3/global_planner_captures/globalplanner_DWA.webm?ref_type=heads DWA local + global planner simulation]'''
 
When used in the simulation, it can be observed that the DWA effectively guides the robot to follow the globally planned path while smoothly navigating around obstacles and avoiding collisions with walls. As the robot approaches a wall or obstacle, the DWA algorithm continuously evaluates the robot's velocity and steering commands within a dynamic window of admissible values.
 
By continuously reevaluating the robot's commands and choosing the optimal velocity and steering angle, the DWA ensures that the robot slows down and adjusts its trajectory when approaching obstacles or walls. This behavior can be observed in the video, where the robot smoothly decelerates and turns away from walls as it gets closer to them. The DWA's ability to adapt the robot's motion in real-time based on its current state and surroundings allows for safe and efficient navigation, even in cluttered or dynamic environments.
 
=== Practical demonstration ===
[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/connectionTest/4_localization/Histogram_local_global.mp4?ref_type=heads VFH '''local + global planner practical demonstration''']
 
The issue where the robot thinks it has reached its destination halfway through the path is likely caused by an incorrect implementation of the arctangent function (atan) used for calculating the robot's orientation or heading. If this function is not implemented correctly, particularly in handling the signs of the displacements or the quadrant mapping, the calculated angle can flip or be off by 180 degrees (or another multiple of pi radians).
 
This error in angle calculation can propagate through the robot's motion model, causing the robot to think it is facing the opposite direction or at a completely different location on the map than its actual position. For example, if the angle calculation flips by 180 degrees midway along a straight path, the robot's perceived orientation will be inverted, and it may interpret its current position as the endpoint or destination, even though it has only traversed half of the intended path. The simulation shows a debugged version of the same VFH method. Earlier simulation attempts moved the robot to exactly the same endpoint in the simulation as for the world map test.
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/connectionTest/4_localization/DWA_local_global.mp4 DWA local + global planner practical demonstration]'''
 
Initially, the robot follows the right path and nicely goes around the first corner. After that, the large distance creates a coordinate displacement, causing the robot to overshoot close to the wall. Notice that at that moment the DWA local navigator recognizes the wall, slows down, and slowly approaches the path point to a satisfying distance. After that it continues to follow the path; after the corner the robot overshoots again due to the misplacement and retries to reach the path point. In doing so it drives into the chairs, which the local navigator finds hard to recognize. There the video ends.
 
== '''Week 4 & 5: Localization''' ==
 
=== Assignment 0.1 Explore the code framework ===
'''How is the code structured?'''
 
The code for this assignment is structured as multiple classes. Each object or component has its own class, with functions that can be used to change aspects of itself.
 
'''What is the difference between the ParticleFilter and ParticleFilterBase classes, and how are they related to each other?'''
 
ParticleFilterBase provides the base functionalities of a particle filter, like setting noise levels and returning position lists. ParticleFilter provides the larger functions and connects the particle filter to other classes that need to work with the same data, like the resampler. The ParticleFilter runs the system at a high level by calling the more basic functions from ParticleFilterBase.
 
'''How are the ParticleFilter and Particle classes related to each other?'''
 
The ParticleFilter creates the Particles that are used in the particle filter. When operations need to be done on an individual particle the ParticleFilter calls the functions inside the Particle class.
 
'''Both the ParticleFilterBase and Particle classes implement a propagation method. What is the difference between the methods?'''
 
The ParticleFilterBase propagation method asks all individual particles to run their propagation function. The Particle's propagation method does the actual propagation; the ParticleFilterBase function just provides a single command to let all particles perform their operations.
 
=== Assignment 1.1 Initialize the Particle Filter ===
[[File:Initialization of the particle filter.png|thumb|Initialization of particle filter]]
The first constructor places random particles around the robot with an equal radial probability. Each particle has an equal chance of being positioned anywhere around the robot, with completely random poses. This method is particularly useful when the robot’s initial position is unknown and there is no information about its location on the global map. By distributing particles randomly, the robot maximizes its likelihood of locating itself within the environment. The key advantages of this approach include wide coverage and flexibility, making it ideal for scenarios where the robot's starting position is completely unknown. However, the disadvantages are lower precision due to the wide spread of particles and higher computational demands, as more particles are needed to achieve accurate localization.
 
In contrast, when the robot’s position and orientation are known, the second constructor is more effective. This method initializes particles with a higher probability of being aligned with the robot's known state, placing more particles in front of the robot and orienting them similarly. This concentrated placement allows for more accurate corrections of the robot’s position and orientation as it moves forward. The advantages of this approach include higher precision and efficiency, as fewer particles are needed for accurate localization. However, it has the drawback of limited coverage, making it less effective if the initial guess of the robot’s position is incorrect. Additionally, it requires some prior knowledge of the robot’s initial position and orientation. This constructor is best used when the robot has a general idea of its starting location and direction.[[File:Assignment1 2 localization green arrow2.png|thumb|228x228px|Particle cloud with green arrow average]]
 
=== Assignment 1.2 Calculate the pose estimate ===
The filter takes a weighted average over all particles to estimate its state: the position and orientation of the robot. Particles that are in line with the sensor data receive a larger weight. This way, the particle swarm adjusts itself to the sensor data, and over time the pose estimated by the particle swarm converges to the actual pose of the robot. However, a discrepancy in pose remains. This is caused by the limited number of particles, which prevents a complete representation of the pose. Iterating over the particle swarm with noise helps improve the estimated pose of the robot over time.
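A sketch of this weighted average; note that the orientation must be averaged via unit vectors to handle angle wrap-around (names are illustrative):

<syntaxhighlight lang="cpp">
// Sketch of the weighted pose estimate; angles are averaged through
// cos/sin components to avoid wrap-around problems at +-pi.
#include <cmath>
#include <vector>

struct WeightedPose { double x, y, a, w; };

WeightedPose estimate(const std::vector<WeightedPose>& particles)
{
    double x = 0, y = 0, cs = 0, sn = 0, wsum = 0;
    for (const auto& p : particles)
    {
        x += p.w * p.x;
        y += p.w * p.y;
        cs += p.w * std::cos(p.a);
        sn += p.w * std::sin(p.a);
        wsum += p.w;
    }
    // Assumes wsum > 0; the returned w carries the total weight.
    return { x / wsum, y / wsum, std::atan2(sn, cs), wsum };
}
</syntaxhighlight>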
 
When the sensor data provides insufficient feedback, for example when the robot is in open space, the pose estimate is impossible to predict accurately. For this, more information on the surroundings of the robot is required. 
 
=== Assignment 1.3 Propagate the particles with odometry ===
'''Why do we need to inject noise into the propagation when the received odometry information already has an unknown noise component?'''
 
The noise of the odometry data is unknown. By manually adding noise to the odometry data, we can better represent the uncertainties in the robot's movement model. This helps in capturing deviations from the expected path. Additionally, it is important to keep refreshing the particles to maintain a broad coverage of the environment. If we did not, we would risk that insufficient exploration of the surroundings causes our particles to converge to an incorrect state, which makes it harder to recover the robot's real location and orientation.
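A sketch of such a propagation step with injected Gaussian noise (the noise levels shown are illustrative, not our tuned values):

<syntaxhighlight lang="cpp">
// Sketch of particle propagation with injected noise; the sigma values
// are illustrative placeholders, not tuned parameters.
#include <cmath>
#include <random>

struct Pose { double x, y, a; };

Pose propagate(const Pose& p, double odom_dist, double odom_rot,
               std::mt19937& rng,
               double sigma_d = 0.02, double sigma_a = 0.05)
{
    std::normal_distribution<double> noise_d(0.0, sigma_d);
    std::normal_distribution<double> noise_a(0.0, sigma_a);

    const double d   = odom_dist + noise_d(rng); // noisy travelled distance
    const double rot = odom_rot  + noise_a(rng); // noisy rotation

    Pose out;
    out.a = p.a + rot;
    out.x = p.x + d * std::cos(out.a);
    out.y = p.y + d * std::sin(out.a);
    return out;
}
</syntaxhighlight>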
 
'''What happens when we stop here, and do not incorporate a correction step?'''
 
When we start moving we need to keep adjusting for the environment: new sensor data must be continuously fed to the filter to approximate its state. Without this, we could not readjust our position and correct for the odometry errors accumulated over time. Even when stationary, the filter's particles would diverge from the true state over time. If we do not keep applying correction steps, errors simply accumulate: noise and inaccuracies cause increasingly worse state estimates.
 
In the following link the resulting moving particle cloud is shown:
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week4/Captures/moving_particle_cloud.webm Simulation Localization Moving particle cloud]''' 
 
=== Assignment 2.1 Correct the particles with LiDAR ===
'''What does each of the component of the measurement model represent, and why is each necessary.'''
 
The measurement model uses data, prediction and some additional measurement model data named lm. The data is the actual measurement acquired from the LiDAR of the robot. The prediction is the measurement the raycast algorithm provides and what the data would be if the robot were to be at the exact position of the particle.
 
The likelihood of the particle being at the correct position is represented by four components:
 
First, a probability of measuring the expected prediction; in other words, if the particle is right, this part gives a high contribution to the total likelihood.
 
Secondly, there is a chance the LiDAR has an error, or a measurement happens to reflect off something and does not return properly. To accommodate this, a component is added that represents the chance of getting a maximum value as a reading.
 
Third, there is always a chance of an especially large dust particle or some other form of interference blocking the measurement. This chance is modeled by an exponential decay from zero until the predicted measurement.
 
Lastly, there is always a chance that an error in some system replaces a measurement with a random value. Therefore a small probability is added across the entire possible range of measurement values.
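Put together, the per-beam likelihood is a weighted sum of these four components, as in the standard beam-based measurement model; the weights and spread parameters below are assumptions, and the short component is shown in simplified (unnormalised) form:

<syntaxhighlight lang="cpp">
// Sketch of the four-component beam model described above; all weights
// and spread parameters are illustrative placeholders.
#include <cmath>

double beamLikelihood(double z, double z_pred, double z_max_range,
                      double sigma_hit = 0.1, double lambda_short = 1.0,
                      double w_hit = 0.8, double w_max = 0.05,
                      double w_short = 0.1, double w_rand = 0.05)
{
    // 1. Gaussian around the predicted measurement
    const double p_hit = std::exp(-0.5 * (z - z_pred) * (z - z_pred)
                                  / (sigma_hit * sigma_hit))
                         / (sigma_hit * std::sqrt(2.0 * M_PI));
    // 2. Max-range readings (missed returns)
    const double p_max = (z >= z_max_range) ? 1.0 : 0.0;
    // 3. Unexpected short readings: exponential decay up to the prediction
    const double p_short = (z <= z_pred)
                           ? lambda_short * std::exp(-lambda_short * z)
                           : 0.0;
    // 4. Uniform random measurements over the whole range
    const double p_rand = 1.0 / z_max_range;

    return w_hit * p_hit + w_max * p_max + w_short * p_short + w_rand * p_rand;
}
</syntaxhighlight>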
 
 
 
'''With each particle having N >> 1 rays, and each likelihood being ∈ [0, 1] , where could you see an issue given our''' '''current implementation of the likelihood computation.'''
 
All likelihoods per particle are summed and normalized by the number of beams used, which puts the value between 0 and 1, just like the individual likelihoods. An issue could arise when the components of the measurement model are not weighted properly and do not give a decisive difference. Additionally, when adding the likelihoods of all beams together, the weight of the short component needs careful consideration: setting it too high would let a particle close to a wall accumulate a large sum of small short contributions, which might overshadow a position that gets some hits exactly correct.
 
=== Assignment 2.2 Re-sample the particles ===
'''What are the benefits and disadvantages of both the multinomial resampling and the stratified resampling?'''
 
The multinomial resampling method always has a chance that the new particles are biased towards a particular parent particle, due to the randomness with which the new particles are divided over the possible parent particles. The stratified method takes away this randomness by dividing the new particles evenly, but still weighted, over the parent particles.
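An illustrative sketch of stratified resampling, assuming normalised weights; multinomial resampling would instead draw all N positions independently over (0, 1]:

<syntaxhighlight lang="cpp">
// Sketch of stratified resampling: one uniform draw per equally sized
// weight stratum. Assumes the weights are normalised to sum to 1.
#include <random>
#include <vector>

std::vector<int> stratifiedResample(const std::vector<double>& weights,
                                    std::mt19937& rng)
{
    const int n = static_cast<int>(weights.size());
    std::vector<int> indices(n);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    double cumulative = weights[0];
    int i = 0;
    for (int j = 0; j < n; ++j)
    {
        const double target = (j + u(rng)) / n; // one draw inside stratum j
        while (target > cumulative && i < n - 1)
            cumulative += weights[++i];
        indices[j] = i; // parent particle i survives into the new set
    }
    return indices;
}
</syntaxhighlight>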
 
 
'''With each particle having N>> 1 rays, and each likelihood being ∈ [0, 1] , where could you see an issue given our current implementation of the likelihood computation?'''
 
A high number of particles, each with a high number of beams, requires a lot of computational power. This might not be available and would slow down the robot.
 
=== Assignment 2.3 Test on the physical setup ===
While testing on the physical setup in the last session, the algorithm was not fully working; the day after, the algorithm did work. We have therefore chosen to show the simulation videos instead. Reducing the number of particles significantly ensures that the code runs smoothly within the desired timestep. The other parameters of the algorithm do have some effect, but the simulation does not improve very significantly by tweaking them away from the defaults. This might change when the algorithm is also tested on the physical setup.
 
==== Multinomial resampling ====
The number of particles is first set to 2500, for which the simulation is unable to keep up with the framerate of 5 Hz. This results in the particles lagging behind, as shown in the simulation video. When the robot stops, it is clear that the particles follow the robot quite well, but this particle count is not feasible for the real setup.
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week4/Captures/multinominal_resampling_2500particles.webm Simulation Localization Multinomial 2500 Particles]'''
 
Reducing the number of particles to 300 results in the simulation being able to follow the robot much better. The video of multinomial resampling shows that the green arrow closely follows the robot and is at most one timestep behind. When turning in place, the arrow does lag behind a timestep, which seemed to improve a little by tuning the propagation parameters.
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week4/Captures/multinominal_resampling_2500particles.webm Simulation Localization Multinomial 300 Particles]'''
 
==== Stratified resampling ====
Similar results are found using stratified resampling. With 2500 particles, the simulation struggles to maintain a 5 Hz framerate, causing the particles to lag behind the robot's movements. However, when the robot stops, the particles accurately represent the robot's position.
 
[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week4/Captures/Simulation%20Localization%20Stratified%202500%20Particles.webm '''Simulation Localization Stratified 2500 Particles''']
 
Reducing the number of particles to 300 significantly improves simulation performance, as seen with multinomial resampling. The green arrow, representing the robot's estimated position, follows the robot more closely and consistently.
 
[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week4/Captures/Simulation%20Localization%20Stratified%20300%20Particles.webm '''Simulation Localization Stratified 300 Particles''']
 
==== Localization for final assignment ====
The algorithm is fast and accurate, such that the estimated position closely follows the robot. In the final assignment the script runs at 10 Hz instead of 5 Hz, which means that the number of particles has to be reduced a bit further or the script has to be made more efficient. We still have to test the algorithm with unmodeled obstacles on the practical setup, but it shows promising results in the simulation.


'''Group members:'''

{| class="wikitable"
!Name
!Student ID
|-
|Marten de Klein
|1415425
|-
|Ruben van de Guchte
|1504584
|-
|Vincent Hoffmann
|1897721
|-
|Adis Husanović
|1461915
|-
|Lysander Herrewijn
|1352261
|-
|Timo van der Stokker
|1228489
|}

== '''Week 1: The art of not crashing''' ==

All runs stop in time in front of the walls for exercise 1 and for map 1 of the second exercise. However, on the second map only Marten's, Adis' and Ruben's code let the robot drive past the obstacle when it was safe to do so. Adis and Marten implemented a safety distance small enough for the robot to pass the obstacle. Ruben's method is more dynamic and suitable in a wider range of cases, and it is also better at handling upcoming obstacles further away: the angular region in which the robot is not allowed to drive (because a laser point is closer than the safety distance) becomes smaller as the robot gets closer to a gap. We deemed this preferable, as the robot should be able to navigate smaller corridors. So, a more dynamic approach should be implemented for the local navigator.

=== Practical demonstration ===

The laser picked up less noise than we expected and is fairly accurate in its measurements. However, it can only see objects at its own mounting height, since the LiDAR scans in a single horizontal plane. For example, when standing in front of the robot, the laser could only detect our shins as two half circles (the front of the legs).

When testing our don't-crash files on the robot, it stood out that the stopping distance needed to include the distance from the measuring point (the LiDAR) to the edge of the robot, which was measured to be approximately 10 cm. After adding this offset, the robot was first tested on a barrier, as seen in Robot stopping at barrier.
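A minimal sketch of such a check (hypothetical names; the actual code is in our GitLab repository):

<syntaxhighlight lang="cpp">
#include <vector>

// The LiDAR sits roughly 10 cm behind the robot's front edge, so that offset
// is added to the desired clearance before comparing raw range values.
bool mustStop(const std::vector<float>& ranges,
              float clearance,                // [m] desired gap from the robot edge
              float lidar_to_edge = 0.10f) {  // [m] measured on the robot
    const float threshold = clearance + lidar_to_edge;
    for (float r : ranges) {
        if (r > 0.01f && r < threshold) {     // ignore invalid near-zero readings
            return true;                      // something is too close: stop
        }
    }
    return false;
}
</syntaxhighlight>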

Next, we let a person walk in front of it to see if the code would still work. Fortunately, it did, as can be seen in Robot stopping at passing person.

Finally, we tested Lysander's code that turns the robot when it sees an obstacle and then continues. This gives an indication of the robot's blind spot, as it will only drive when nothing within its vision range is inside its safety distance. This can be seen in Robot stopping and turning at feet.

== '''Week 2: Local Navigation''' ==

''Figure: A theoretical view of the VFH technique; the numbers are made up and for illustration purposes only. The robot counts the number of LiDAR beams that hit an object within each angular bracket. The range of angles, approximately [-110°, 110°], is what we measured in our tests.''

=== Vector field histogram (VFH) ===

'''Implementation:'''

The simplified vector field histogram approach was initially implemented as follows.

The robot starts out with a goal retrieved from the global navigation function; its laser data is received from the LiDAR, and it keeps track of its own position internally. A localization function will eventually be added to enhance its ability to track its own position. The laser data points are grouped in evenly spaced brackets of polar angles around the robot. For each bracket, the algorithm counts how many LiDAR points lie closer than a specified safety threshold distance and saves this count per bracket. This is indicated by the blue bars in the plot to the right.

Next, it calculates the direction of the goal by computing the angle between its own position and the goal position. It then checks whether the angle towards the goal is unoccupied by inspecting the value of the bracket corresponding to that angle. A bracket is unoccupied if the number of LiDAR hits within the safety distance is lower than the threshold value. Additionally, some extra neighboring brackets are checked, depending on a safety-width hyperparameter; the goal of this safety width is to make sure the gap is wide enough for the robot to fit through. If the direction towards the goal is occupied, the code checks the brackets to the left and to the right, saves the closest unoccupied angle at either side, picks whichever deviation is smaller, and sets that angle as its new goal. If we look at our example histogram, we can see that the forward direction to the goal is open. However, depending on the number of safety brackets, the behavior might change. In our case we use 100 brackets, as shown in the example histogram, and in some cases about 5 safety brackets. In the example, the robot would steer slightly in the positive angle direction (to the right) to avoid the first brackets left of x=0 that exceed the safety threshold.

Afterwards, it compares its own angle with the goal angle and drives forward if it is aligned with the goal direction within a small angular margin. The global navigator keeps track of the position of the robot and updates the goal if the robot is close enough to its current goal.
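A condensed sketch of the bracket histogram and direction selection described above (hypothetical names and parameters, not a copy of our repository code):

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cstddef>
#include <vector>

// Returns the index of the chosen free bracket, or -1 if none is free.
int pickDirection(const std::vector<float>& ranges, float angle_min,
                  float angle_inc, float goal_angle, float safety_dist,
                  int n_brackets, int hit_threshold, int safety_brackets) {
    // Build the histogram: one bin per angular bracket, counting beams that
    // hit something within the safety distance.
    std::vector<int> hist(n_brackets, 0);
    const float span = angle_inc * static_cast<float>(ranges.size());
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        if (ranges[i] > 0.01f && ranges[i] < safety_dist) {
            int b = static_cast<int>((i * angle_inc) / span * n_brackets);
            hist[std::min(b, n_brackets - 1)]++;
        }
    }
    // A bracket is free if it and its safety neighbours stay below the
    // hit threshold, so the gap is wide enough for the robot.
    auto isFree = [&](int b) {
        for (int k = -safety_brackets; k <= safety_brackets; ++k) {
            int j = b + k;
            if (j < 0 || j >= n_brackets || hist[j] >= hit_threshold) return false;
        }
        return true;
    };
    int goal_b = static_cast<int>((goal_angle - angle_min) / span * n_brackets);
    // Search outward from the goal bracket, preferring the smaller deviation.
    for (int d = 0; d < n_brackets; ++d) {
        if (goal_b - d >= 0 && isFree(goal_b - d)) return goal_b - d;
        if (goal_b + d < n_brackets && isFree(goal_b + d)) return goal_b + d;
    }
    return -1;  // no free bracket: stop and let the global planner replan
}
</syntaxhighlight>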

This initial implementation had some oversights and edge cases that we came across when testing in the simulator and on the real robot. The first issue arose when the robot had to turn a corner right after driving close alongside an obstacle. At such a small distance, the cone of laser data that needs to be checked is very large, yet if that same cone were used at larger distances, the robot would discard routes it could easily fit through. This was fixed by performing a second check on the laser data at the side the robot is turning towards; if this check finds an object, the robot moves forward a bit more before turning. This solved the problem of the robot steering into objects or walls too early.

'''Advantages:'''

* Implementing VFH for navigation is relatively straightforward, requiring only basic processing of the LiDAR data to compute the histogram of obstacle directions.
* VFH can generate smooth and collision-free paths by considering both obstacle avoidance and goal-reaching objectives.
* VFH is computationally efficient and robust to noisy sensor data.

'''Disadvantages:'''

* VFH may have difficulty distinguishing overlapping obstacles, especially if they are close together and occupy similar angular regions in the LiDAR's field of view. This can be mitigated by only taking LiDAR points within a small distance into account.
* In complex environments with narrow passages or dense clutter, VFH may struggle to find feasible paths due to the limited information provided by the LiDAR sensor and the simplicity of the VFH algorithm.
* VFH performance can be sensitive to parameter settings, such as the size of the histogram bins or the threshold for obstacle detection. Tuning these parameters for optimal performance may require extensive experimentation.

'''Possible failure scenarios:'''

* A possible failure scenario encountered in testing was that the robot's orientation deviated so far from the goal that it became unable to directly check the direction the goal was in. If the robot has to turn so far that the goal falls in the 'blind spot' of 360-220=140 degrees, the robot assumes there is an available path in that direction and rotates towards it. After rotating, it sees that the direction is occupied, turns the other way towards space it can move to, and this cycle repeats.
** Solution: this problem could be solved by implementing an internal representation of previously detected points, as was done in Borenstein, J., & Koren, Y. (1991). In our implementation we instead chose to make sure such challenges do not appear in the path, by choosing more intermediate goal points in the global navigator.
* Another possible failure scenario presents itself when the robot needs to drive more than 90 degrees away from its goal. In this case driving away actually puts the robot further from its goal, and as the robot has no memory, it forgets that there is an obstacle that caused it to turn this way in the first place. After driving far enough that the obstacle exits its detection range, it will try to turn back, causing its behavior to loop between driving away from and towards the obstacle.
** Solution: we again avoid this failure case by only letting the local navigator handle simple paths towards intermediate goals.

'''Implementation of local and global algorithms:'''

The global and local planner can be easily connected via a main script. In the main script the global planner is called first, generating a set of target goals. Then the local planner is called and given only the first target goal. The main script keeps track of the distance from the robot to the target goal, and if this distance becomes smaller than a given value, the local planner is given the next target goal, until the final target goal is reached and the velocity is set to zero. In case a target goal is not reachable (e.g. the distance to the target goal does not decrease for a given time), the global planner should be called again with an updated map.
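A sketch of this main-script loop (the function names are placeholders for the actual calls in our repository):

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// These calls stand in for the real planner and robot interface.
std::vector<Point> globalPlanner();        // PRM + A*: ordered target goals
Point getRobotPosition();                  // odometry / localization estimate
void localPlannerStep(const Point& goal);  // one VFH or DWA control step
void stopRobot();                          // send zero velocity

void runMission() {
    std::vector<Point> goals = globalPlanner();
    std::size_t current = 0;
    const double reach_dist = 0.2;  // [m] switch to the next goal within this distance
    while (current < goals.size()) {
        Point pose = getRobotPosition();
        if (std::hypot(goals[current].x - pose.x, goals[current].y - pose.y) < reach_dist) {
            ++current;  // hand the local planner the next target goal
            continue;
        }
        localPlannerStep(goals[current]);
    }
    stopRobot();  // final target goal reached
}
</syntaxhighlight>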

'''Demonstrations:'''

'''Simulation:'''

The simulations show how the robot navigates through the maps. There are a few things to notice. Firstly, the robot is not very dynamic: it only turns when standing still. If the direction the robot is facing is safe, it drives forward; otherwise it turns. If VFH is used in the final challenge, a distinction must be made such that the robot does drive forward while turning when no object is directly in front of it. Furthermore, we can see how the VFH runs into a local minimum and manages to escape it by turning around and exploring the other side. This happens, for example, in map 3 after turning past the first corner. The settings are such that the robot can pass the small corridor in map 4 after the first corner. As we selected only a few safety brackets to pass this corridor, the robot cuts corners fairly tightly in the simulation. For the real-life scenario, this should be adjusted to attain a bigger safety margin around corners.

Simulation Demonstration Video MAP3 VFH

Simulation Demonstration Video MAP4 VFH

'''Practical demonstration:'''

The first part of the video shows the robot nicely passing an obstacle relatively tightly. It moves on and does not stop and turn, knowing it will make it. For the second obstacle, the robot stops, turns, drives, and repeats this a few times before it passes the corner. This is an example where driving forward while turning would give a smoother and also faster result. All in all, the VFH has proven to be a reliable method for avoiding obstacles.

Experimental Demonstration Video VFH


=== Dynamic Window Approach (DWA) ===

''Figure: DWA Explanation.png (general overview of the DWA algorithm)''

'''Implementation:'''

The DWA algorithm is a velocity-based algorithm that maximizes an objective function. The general overview is shown in the figure to the right.

The algorithm starts by defining the velocity and acceleration constraints, such as velocity and acceleration limits, braking decelerations and safety distances. It then creates a set of possible velocities, forming a grid from the minimum to the maximum allowable velocity in discrete steps. Velocities that are not reachable within a single time step, given the current velocity and the acceleration limits, are filtered out. Each remaining velocity in the set is checked for potential collisions with obstacles detected by the laser sensors; if a combination of forward and angular velocity would result in a collision, it is removed from the set. This is done by calculating the distance to obstacles along the curved trajectory the robot would follow and comparing it with the robot's stopping distance. If the velocity is purely forward, without rotation, the distance is simply the straight-line distance.

A scoring system is used to determine which combination of forward and angular velocity suits best at each step. For each feasible velocity, a score is computed from three criteria in the objective function: the heading towards the goal, the forward velocity, and the distance to the nearest obstacle. Each criterion is first normalised, and the weights for the criteria are defined to balance their importance. The weights are tuned based on performance in simulation and experiments.

Finally, the robot's velocity is updated with the highest-scoring velocity, so that the robot moves in a safe and smooth manner.
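A condensed sketch of this selection step (hypothetical names, weights and grid resolution; the real values were tuned in simulation and experiments):

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cmath>
#include <functional>

struct Cmd { double v, w, score; };

// 'clearance' returns the distance to the nearest obstacle along the arc
// driven with forward velocity v and angular velocity w.
Cmd bestCommand(double v_cur, double w_cur, double dt,
                double v_max, double w_max, double a_v, double a_w,
                double goal_heading,
                const std::function<double(double, double)>& clearance) {
    const double k_h = 1.0, k_v = 0.4, k_d = 0.6;  // assumed objective weights
    const double d_max = 2.0;                      // [m] clearance normalization
    // Dynamic window: velocities reachable within one time step.
    const double v_lo = std::max(0.0, v_cur - a_v * dt);
    const double v_hi = std::min(v_max, v_cur + a_v * dt);
    const double w_lo = std::max(-w_max, w_cur - a_w * dt);
    const double w_hi = std::min(w_max, w_cur + a_w * dt);
    Cmd best{0.0, 0.0, -1.0};
    const int n = 10;  // grid resolution over the window
    for (int i = 0; i <= n; ++i) {
        for (int j = 0; j <= n; ++j) {
            const double v = v_lo + (v_hi - v_lo) * i / n;
            const double w = w_lo + (w_hi - w_lo) * j / n;
            const double dist = clearance(v, w);
            if (dist < v * v / (2.0 * a_v)) continue;  // cannot brake in time
            // Normalized criteria: heading towards goal, speed, clearance.
            const double heading = 1.0 - std::fabs(goal_heading - w * dt) / M_PI;
            const double score = k_h * heading
                               + k_v * (v_max > 0.0 ? v / v_max : 0.0)
                               + k_d * std::min(dist / d_max, 1.0);
            if (score > best.score) best = {v, w, score};
        }
    }
    return best;  // highest-scoring feasible (v, w) pair
}
</syntaxhighlight>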


'''Advantages:'''

* Effective at avoiding obstacles detected by the LiDAR sensor in real time; it dynamically adjusts the robot's velocity and heading to navigate around obstacles while aiming to reach its goal.
* Focuses on local planning, considering only nearby obstacles and the robot's dynamics when generating trajectories. This enables the robot to react quickly to changes in the environment without requiring a global map.

'''Disadvantages:'''

* Can get stuck in local minima, where the robot is unable to find a feasible trajectory to its goal due to obstacles blocking its path. This can occur in highly cluttered environments or when the goal is located in a narrow passage.
* Does not always find the optimal path to the goal, especially in environments with complex structures or long-range dependencies.
* The performance can be sensitive to the choice of parameters, such as the size of the dynamic window or the velocity and acceleration limits of the robot. Tuning these parameters can be challenging and may require empirical testing.

'''Possible failure scenarios:'''

* A possible scenario is a target position inside a wall. In this case the robot will never reach the target position, because the required velocity would be omitted from the velocity set. Instead it will either stop as close as possible to the target position, once no velocities are feasible anymore due to collision risk, or keep moving around the target position trying to minimise the heading angle.
** Solution: ensuring that a target position is never inside a wall could be a method, but this only works for known obstacles. If the target position is part of a set of target positions, the robot could try to skip it and immediately go to the next position, with the risk of getting stuck in a local minimum. The best solution is giving the global planner a signal with an updated map and letting it create a new set of target positions.
* A second possible scenario is that the target position lies inside the radius of curvature of the robot, resulting in the robot circling around the target without ever reaching it.
** Solution: this scenario can be fixed by tuning the parameters of the algorithm. Making the heading angle more important than the forward velocity is one way to avoid getting stuck in a radial loop. However, when the target is still far away this is less desirable, because obstacle avoidance works better with a high score on the forward velocity. A solution would be a somewhat dynamic parameter setting, where the robot uses different tuning values far from the target than close to it.
* A third possible scenario is that the odometry data is incorrect, resulting in a wrong calculation of the desired heading angle. This would make the robot target a different goal location.
** Solution: one fix is to reduce the acceleration values to reduce wheel slip; this makes the robot slightly slower but better controlled. Another solution, to be implemented later, is the addition of a localization algorithm that gives the robot a higher positional accuracy.

'''Linking global and local planner:'''

The global and local planner are connected via the same main-script structure described for the VFH above: the global planner generates the set of target goals, the local planner receives them one at a time, and the global planner is called again with an updated map if a target goal turns out to be unreachable.

'''Demonstrations:'''

'''Simulation:'''

In the first video, the robot moves away from the first obstacle to the right. It then turns back left before it has passed the obstacle. This has to do with the method of only looking in a cone in front of itself: at first it does not see the walls beside itself, since the hallway is quite wide. When the robot gets closer to the obstacle, it again turns right and passes it with a safe distance between itself and the obstacle. It then rotates towards the opening and drives in a smooth motion to its final destination.

In the second video, the robot smoothly moves away from its first obstacle. It then turns back left to move away from the second obstacle. After the second obstacle it orients itself back towards the goal and chooses to go around the third obstacle on the right. In some simulations it chose the other direction; this is purely based on where it sees an opening first. Overall, a safe distance is kept, and the obstacles are already taken into account from quite far away.

Simulation Demonstration Video MAP2 DWA

Simulation Demonstration Video MAP3 DWA

'''Practical demonstration:'''

In the video, the robot smoothly moves towards its goal, adjusting its path as needed when it encounters obstacles. At the start, the robot moves towards the goal until it meets a wall, then it turns in the right direction to avoid the obstacle. After turning, it keeps moving parallel to the wall. Approaching a corner formed by two walls, the robot slows down, ensuring it has enough space to make a left turn. There is enough space for the robot to fit through the left gap, so it turns there. After turning left, the robot again moves forward parallel to the wall while keeping a safe distance from the adjacent wall. When it completed its last turn around the wall, the robot arrived at its destination accurately enough and stopped smoothly.

Experimental Demonstration Video DWA

== '''Week 3: Global Navigation''' ==

The goal of this project is to reach a certain destination. To reach it, a plan must be made for the robot to follow; this larger-scale plan is called the global plan and is made by the global planner. The global planner used throughout this project is A*, which finds the optimal path through a graph. To create the graph on which the path is planned, a Probabilistic Road Map (PRM) is used, which generates a randomly generated graph for a map. By combining these functions, an optimal path can be generated for a given map.
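For illustration, a compact sketch of A* over such a node graph (hypothetical data layout, not our repository code):

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cmath>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Node { double x, y; std::vector<std::pair<int, double>> edges; };  // (neighbour, cost)

std::vector<int> aStar(const std::vector<Node>& g, int start, int goal) {
    // Admissible heuristic: straight-line distance to the goal node.
    auto h = [&](int i) {
        return std::hypot(g[i].x - g[goal].x, g[i].y - g[goal].y);
    };
    std::vector<double> cost(g.size(), 1e18);
    std::vector<int> parent(g.size(), -1);
    using QE = std::pair<double, int>;  // (cost-so-far + heuristic, node)
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> open;
    cost[start] = 0.0;
    open.push({h(start), start});
    while (!open.empty()) {
        auto [f, n] = open.top(); open.pop();
        if (n == goal) break;
        if (f > cost[n] + h(n) + 1e-9) continue;  // stale queue entry, skip
        for (auto [m, c] : g[n].edges) {
            if (cost[n] + c < cost[m]) {
                cost[m] = cost[n] + c;
                parent[m] = n;
                open.push({cost[m] + h(m), m});
            }
        }
    }
    // Walk back from goal to start; returns only {goal} if unreachable.
    std::vector<int> path;
    for (int n = goal; n != -1; n = parent[n]) path.push_back(n);
    std::reverse(path.begin(), path.end());
    return path;
}
</syntaxhighlight>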

''Figure: Efficiently placed nodes''

'''How could finding the shortest path through the maze using the A* algorithm be made more efficient by placing the nodes differently? Sketch the small maze with the proposed nodes and the connections between them. Why would this be more efficient?'''

The approach of placing nodes only at locations where the robot can change direction, known as "corner-cutting" or "graph-pruning", is an efficient way to implement the A* algorithm for finding the shortest path through a maze. By reducing the number of nodes and connections that need to be explored, this method significantly reduces the computational complexity and search space. It eliminates redundant paths, simplifies the connectivity representation, improves memory efficiency, and generates smoother paths by allowing the robot to move directly between corner nodes. The efficiently placed nodes can be seen in the figure on the right.

'''Would implementing PRM in a map like the maze be efficient?'''

PRM works best in open spaces, as it generates random points. In a maze, PRM is unlikely to generate points exactly at the turning points. Additionally, PRM is not ideal for narrow hallways, since points are just as likely to be generated near walls, causing unwanted wiggling as the robot moves from point to point. Furthermore, placing a point inside a narrow hallway is difficult: many misplaced points will be rejected before PRM finally finds a coordinate where a point can be placed. PRM will therefore fail often when trying to place points in hallways, making it less than ideal. As the maze is basically a collection of narrow hallways, the PRM algorithm will be much less efficient there.

'''What would change if the map suddenly changes (e.g. the map gets updated)?'''

If the map suddenly changes, meaning that the map gets updated after the initial path is made, the robot will keep following the path it initially obtained from the A* algorithm on the original map. It will keep trying to follow the points the global path has determined, while still trying to avoid obstacles with the local navigator. The local navigator tries to reach a distance of 0.2 meters from the points determined by the global path while avoiding obstacles. As the global path is outdated, the path the local navigator takes will probably end up stuck in a local minimum, in an unexpected corner for example.

'''How did you connect the local and global planner?'''

The global and local planner are connected via a main script. The global planner is called first, generating a set of points using PRM, after which A* determines the target goals within that set. Then the local planner is called and given only the first target goal. The main script keeps track of the distance from the robot to the target goal, and if this distance becomes smaller than a given value, the local planner is given the next target goal, until the final target goal is reached and the velocity is set to zero. This implementation still assumes that the global planner knows the map quite well and that no stationary unknown obstacles hinder the path of the robot.

'''Test the combination of your local and global planner for a longer period of time on the real robot. What do you see that happens in terms of the calculated position of the robot? What is a way to solve this?'''

The misalignment between the real robot's position and its calculated position is caused by factors like wheel slippage and odometry inaccuracies. This leads to navigation errors. Localization techniques, such as particle filters, estimate the robot's actual position within the global map by updating pose hypotheses based on sensor data. Integrating localization with planners improves navigation accuracy, enables path planning from the correct position, and allows recovery from accidental displacements.

''Figure: Node map when using PRM''

'''Run the A* algorithm using the gridmap (with the provided nodelist) and using the PRM. What do you observe? Comment on the advantage of using PRM in an open space.'''

The observed difference between the routes generated by the A* algorithm using a gridmap representation versus a probabilistic roadmap (PRM) is notable. With the gridmap approach, the robot's path is constrained to horizontal and vertical movements along the grid cells, resulting in a stair-step pattern. While this ensures obstacle avoidance, it can lead to longer and less efficient paths, particularly in open spaces or environments with sparse obstacles.

In contrast, the PRM approach offers a significant advantage. By representing the environment as a graph with nodes denoting collision-free configurations and edges representing feasible paths between them, the A* algorithm can find more direct and diagonal routes through the roadmap. This allows the robot to take shortcuts through open areas, minimizing unnecessary turns or detours. Consequently, the paths generated by the PRM approach are typically shorter and have fewer intermediate waypoints to follow, simplifying navigation and control. Additionally, when generating the node map for PRM, the walls are also inflated to avoid nodes being placed too close to walls. The advantage of the PRM is particularly apparent in open spaces, where it enables the robot to take the most direct routes possible, leading to faster navigation, reduced energy consumption, and potentially smoother trajectories. The comparison can be seen in the figure on the right.

''Figure: Comparison between using a grid map and a probabilistic road map for the A* algorithm''

=== Simulation results ===

==== VFH local + global planner simulation ====

The VFH is a strong contender for the final challenge due to its strong obstacle avoidance. Incorporating VFH with the global planner was initially easy; however, it turned out there were still major issues. The robot would traverse the corridor and the walled-in center of the map well, but it would not correctly follow the global planner. The local planner could be made to drive to the goal just by avoiding walls.

The reported simulation is a debugged version in which the VFH local planner nicely follows the nodes of the global planner, switching to the next node when the robot is closer than 20 cm to the current one. However, it is not completely debugged yet. Due to the implementation of the arctangent function (atan), the desired angle would flip by π radians and the robot would turn fully around before driving onward. This happens when position_x-goal_x>0 flips to position_x-goal_x<0, or similarly for the y-direction. We will attempt to fix this before the final challenge. If this is fixed, the goal is to add a script that allows the robot to drive forward and turn when no objects are in front of it, making its trajectory smoother. It does eventually reach the goal in the simulation. The practical demonstration shows an older version where the goal is incorrect due to similar mistakes in the calculation of the goal and its position.
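The usual fix for this flip (a sketch, not yet our repository code) is to use the two-argument arctangent, which handles the quadrants explicitly:

<syntaxhighlight lang="cpp">
#include <cmath>

// atan(dy / dx) only covers (-pi/2, pi/2), so the computed heading jumps by
// pi when position_x - goal_x changes sign. std::atan2 takes both signs into
// account and returns angles in (-pi, pi].
double goalHeading(double position_x, double position_y,
                   double goal_x, double goal_y) {
    return std::atan2(goal_y - position_y, goal_x - position_x);
}
</syntaxhighlight>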

==== DWA local + global planner simulation ====

When used in the simulation, it can be observed that the DWA effectively guides the robot to follow the globally planned path while smoothly navigating around obstacles and avoiding collisions with walls. As the robot approaches a wall or obstacle, the DWA algorithm continuously evaluates the robot's velocity and steering commands within a dynamic window of admissible values.

By continuously reevaluating the robot's commands and choosing the optimal velocity and steering angle, the DWA ensures that the robot slows down and adjusts its trajectory when approaching obstacles or walls. This behavior can be observed in the video, where the robot smoothly decelerates and turns away from walls as it gets closer to them. The DWA's ability to adapt the robot's motion in real-time based on its current state and surroundings allows for safe and efficient navigation, even in cluttered or dynamic environments.

=== Practical demonstration ===

==== VFH local + global planner practical demonstration ====

The issue where the robot thinks it has reached its destination halfway through the path is likely caused by an incorrect use of the arctangent function (atan) when calculating the robot's orientation or heading. atan only returns angles in (-π/2, π/2); if the signs of the displacements and the quadrant mapping are not handled explicitly, the calculated angle can flip or be off by 180 degrees (π radians).

This error in the angle calculation can propagate through the robot's motion model, making the robot think it is facing the opposite direction or is at a completely different location on the map than its actual position. For example, if the angle calculation flips by 180 degrees midway along a straight path, the robot's perceived orientation is inverted, and it may interpret its current position as the endpoint, even though it has only traversed half of the intended path. The simulation shows a debugged version of the same VFH method; earlier simulation attempts moved the robot to exactly the same endpoint in the simulation as in the world map test.

==== DWA local + global planner practical demonstration ====

Initially the robot follows the right path and nicely goes around the first corner. After that, the large distance creates a coordinate displacement, causing the robot to overshoot close to the wall. Notice that at that moment the DWA local navigator recognizes the wall, slows down, and slowly approaches the path point to a satisfying distance. After that it continues to follow the path; after the corner the robot overshoots again due to the misplacement and retries to reach the path point. It thereby ends up at the chairs, which the local navigator finds hard to recognize, and there the video ends.

== '''Week 4 & 5: Localization''' ==

=== Assignment 0.1 Explore the code framework ===

'''How is the code structured?'''

The code for this assignment is structured as multiple classes. Each object or component has its own class, with functions that operate on its own data.

'''What is the difference between the ParticleFilter and ParticleFilterBase classes, and how are they related to each other?'''

The ParticleFilterBase class provides the base functionalities of a particle filter, like setting noise levels and returning position lists. The ParticleFilter class provides the larger functions and connects the particle filter to other classes that need to work with the same data, like the resampler. The ParticleFilter runs the system at a high level by calling the more basic functions from ParticleFilterBase.

'''How are the ParticleFilter and Particle class related to each other?'''

The ParticleFilter creates the Particles that are used in the particle filter. When operations need to be done on an individual particle the ParticleFilter calls the functions inside the Particle class.

'''Both the ParticleFilterBase and Particle classes implement a propagation method. What is the difference between the methods?'''

The ParticleFilterBase propagation method asks all individual particles to run their own propagation function; the Particle's propagation method does the actual propagation. The ParticleFilterBase function simply provides a single command to let all particles perform their update.
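A structural sketch of this delegation (hypothetical member names, not the exact framework code):

<syntaxhighlight lang="cpp">
#include <vector>

class Particle {
public:
    // Does the actual motion update for this single pose hypothesis.
    void propagateSample(double dx, double dy, double da) {
        x_ += dx; y_ += dy; a_ += da;  // noise injection is added in Assignment 1.3
    }
private:
    double x_ = 0.0, y_ = 0.0, a_ = 0.0;
};

class ParticleFilterBase {
public:
    // A single command that lets every particle perform its own propagation.
    void propagateSamples(double dx, double dy, double da) {
        for (Particle& p : particles_) p.propagateSample(dx, dy, da);
    }
protected:
    std::vector<Particle> particles_;
};
</syntaxhighlight>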

=== Assignment 1.1 Initialize the Particle Filter ===

''Figure: Initialization of the particle filter''

The first constructor places random particles around the robot with an equal radial probability. Each particle has an equal chance of being positioned anywhere around the robot, with completely random poses. This method is particularly useful when the robot’s initial position is unknown and there is no information about its location on the global map. By distributing particles randomly, the robot maximizes its likelihood of locating itself within the environment. The key advantages of this approach include wide coverage and flexibility, making it ideal for scenarios where the robot's starting position is completely unknown. However, the disadvantages are lower precision due to the wide spread of particles and higher computational demands, as more particles are needed to achieve accurate localization.

In contrast, when the robot’s position and orientation are known, the second constructor is more effective. This method initializes particles with a higher probability of being aligned with the robot's known state, placing more particles in front of the robot and orienting them similarly. This concentrated placement allows for more accurate corrections of the robot’s position and orientation as it moves forward. The advantages of this approach include higher precision and efficiency, as fewer particles are needed for accurate localization. However, it has the drawback of limited coverage, making it less effective if the initial guess of the robot’s position is incorrect. Additionally, it requires some prior knowledge of the robot’s initial position and orientation. This constructor is best used when the robot has a general idea of its starting location and direction.
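A sketch of the two initialization strategies (hypothetical names; the ranges and standard deviations are assumed tuning parameters):

<syntaxhighlight lang="cpp">
#include <cmath>
#include <random>
#include <vector>

struct Pose { double x, y, a; };

// Uniform constructor: no prior knowledge, spread particles over the map
// with completely random poses.
std::vector<Pose> initUniform(int n, double xmin, double xmax,
                              double ymin, double ymax, std::mt19937& rng) {
    std::uniform_real_distribution<double> ux(xmin, xmax), uy(ymin, ymax),
                                           ua(-M_PI, M_PI);
    std::vector<Pose> p(n);
    for (auto& s : p) s = {ux(rng), uy(rng), ua(rng)};
    return p;
}

// Gaussian constructor: concentrate particles around a known initial guess,
// aligned with the robot's known state.
std::vector<Pose> initGaussian(int n, Pose mean, double sigma_xy,
                               double sigma_a, std::mt19937& rng) {
    std::normal_distribution<double> nx(mean.x, sigma_xy), ny(mean.y, sigma_xy),
                                     na(mean.a, sigma_a);
    std::vector<Pose> p(n);
    for (auto& s : p) s = {nx(rng), ny(rng), na(rng)};
    return p;
}
</syntaxhighlight>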

''Figure: Particle cloud with green arrow average''

=== Assignment 1.2 Calculate the pose estimate ===

The filter takes a weighted average over all particles to estimate its state: the position and orientation of the robot. Particles that are in line with the sensor data receive a larger weight. This way, the particle swarm adjusts itself to the sensor data, and over time the pose estimated by the particle swarm converges towards the actual pose of the robot. However, a discrepancy in pose remains, caused by the limited number of particles, which does not allow a complete representation of the pose. Iterating over the particle swarm with noise helps improve the estimated pose of the robot over time.
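A sketch of such a weighted pose estimate (hypothetical names; note the circular averaging of the orientation, which our actual implementation may handle differently):

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstddef>
#include <vector>

struct Pose { double x, y, a; };

// Weighted average over all particles; the orientation is averaged via its
// sine and cosine so that angles near +pi and -pi do not cancel out.
Pose poseEstimate(const std::vector<Pose>& p, const std::vector<double>& w) {
    Pose e{0.0, 0.0, 0.0};
    double s = 0.0, c = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        e.x += w[i] * p[i].x;
        e.y += w[i] * p[i].y;
        s += w[i] * std::sin(p[i].a);
        c += w[i] * std::cos(p[i].a);
    }
    e.a = std::atan2(s, c);  // assumes the weights are normalized to sum to 1
    return e;
}
</syntaxhighlight>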

When the sensor data provides insufficient feedback, for example when the robot is in open space, the pose estimate is impossible to predict accurately. For this, more information on the surroundings of the robot is required.

=== Assignment 1.3 Propagate the particles with odometry ===

'''Why do we need to inject noise into the propagation when the received odometry information already has an unknown noise component?'''

The noise of the odometry data is unknown. By manually adding noise to the odometry data, we can better represent the uncertainties in the robot's movement model, which helps in capturing deviations from the expected path. Additionally, it is important to keep refreshing the particles to maintain a broad coverage of the environment. If we did not, we would risk that insufficient exploration of the surroundings causes our particles to converge to an incorrect state, which makes it harder to correct towards the robot's real location and orientation.
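A minimal sketch of this propagation step with injected noise (hypothetical names; the sigma values are assumed tuning parameters):

<syntaxhighlight lang="cpp">
#include <random>

struct Pose { double x, y, a; };

// Propagate one particle with the odometry increment plus injected Gaussian
// noise, so the particle cloud keeps covering the uncertainty in the motion.
Pose propagate(Pose p, double dx, double dy, double da,
               double sigma_xy, double sigma_a, std::mt19937& rng) {
    std::normal_distribution<double> nxy(0.0, sigma_xy), na(0.0, sigma_a);
    p.x += dx + nxy(rng);
    p.y += dy + nxy(rng);
    p.a += da + na(rng);
    return p;
}
</syntaxhighlight>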

'''What happens when we stop here, and do not incorporate a correction step?'''

Once the robot starts moving, we need to keep adjusting for the environment: new sensor data must be continuously fed to the filter to approximate its state. Without this, we could not readjust our position estimate and correct for odometry errors accumulated over time. Even when stationary, the filter's particles would diverge from the true state over time. If we do not keep applying correction steps, we simply accumulate errors: noise and inaccuracies would cause increasingly worse state estimates.

The following link shows the resulting moving particle cloud:

Simulation Localization Moving particle cloud

=== Assignment 2.1 Correct the particles with LiDAR ===

'''What does each component of the measurement model represent, and why is each necessary?'''

The measurement model uses the data, the prediction, and some additional measurement model parameters named lm. The data is the actual measurement acquired from the LiDAR of the robot. The prediction is the measurement provided by the raycast algorithm: what the data would be if the robot were at the exact position of the particle.

The likelihood of the particle being at the correct position is called the likelihood and is composed of four components:

First, a probability of measuring the expected prediction; in other words, if the particle is right, this component gives a high contribution to the total likelihood.

Secondly, there is a chance the LiDAR has an error, or a measurement happens to reflect off something and does not return properly. To accommodate the chance of this happening, a component is added that represents the chance of getting a maximum-range reading.

Third, there is always a chance of an especially large dust particle or some other form of interference blocking the measurement. This chance is modeled by an exponential decay from zero until the predicted measurement.

Lastly, there is always a chance that an error in some system replaces a measurement with a random value. Therefore, a small probability is added across the entire possible range of measurement values.
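For a single beam, the four components described above could be combined as follows (a sketch following the standard beam-based measurement model; the parameter names are assumptions, not our exact code):

<syntaxhighlight lang="cpp">
#include <cmath>

// z_hit: Gaussian around the raycast prediction, z_max: max-range failures,
// z_short: exponential decay for unexpected nearby objects, z_rand: uniform.
double beamLikelihood(double measured, double predicted, double range_max,
                      double sigma_hit, double lambda_short,
                      double z_hit, double z_short, double z_max, double z_rand) {
    double p = 0.0;
    // 1) Probability of measuring roughly the predicted distance.
    p += z_hit * std::exp(-0.5 * std::pow((measured - predicted) / sigma_hit, 2))
               / (sigma_hit * std::sqrt(2.0 * M_PI));
    // 2) Failed or absorbed reflection: the reading saturates at max range.
    if (measured >= range_max) p += z_max;
    // 3) Unexpected object in front of the predicted one: exponential decay
    //    from zero up to the predicted measurement.
    if (measured < predicted)
        p += z_short * lambda_short * std::exp(-lambda_short * measured);
    // 4) Completely random measurement anywhere in the valid range.
    p += z_rand / range_max;
    return p;  // the mixture weights z_* are assumed to sum to 1
}
</syntaxhighlight>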

