Mobile Robot Control 2024 The Iron Giant
Group members:
{| class="wikitable"
|+ Caption
! Name !! Student ID
|-
| Marten de Klein || 1415425
|-
| Ruben van de Guchte || 1504584
|-
| Vincent Hoffmann || 1897721
|-
| Adis Husanović || 1461915
|-
| Lysander Herrewijn || 1352261
|-
| Timo van der Stokker || 1228489
|}


== Week 1: theoretical exercises ==
For week one, we had to do the following exercises:


Question 1: Think of a method to make the robot drive forward but stop before it hits something.  


Question 2: Run your simulation on two maps, one containing a large block in front of the robot, the second containing a block the robot can pass by safely when driving straight.


Our answers are detailed below.


'''Exercise 1, Lysander Herrewijn:'''
 
The code utilizes the minimum of the scan data. It loops over all laser points and saves the smallest distance. If that smallest distance is larger than 0.3 m, the robot drives forward; if it is smaller than 0.3 m, the robot rotates counterclockwise. When new scanner data is available, the smallest distance is recomputed. At a certain point, the robot has turned enough that it will drive forward again until it meets a new wall. The distance of 0.3 m is chosen because it gives the robot enough space to make its turn, with a margin of error for the scanner data and for turning. '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/dontcrash.cpp?ref_type=heads Code]'''
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise1_lysander.mp4?ref_type=heads Screen capture exercise 1]'''
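
This stop-and-turn idea can be summarised in a short sketch. It is illustrative only and not the repository code: the scan is modelled as a plain vector of range readings and the resulting command is simply printed, whereas the actual code reads the laser and sends base velocity commands through the course framework; the forward and rotation speeds used here are placeholder values.

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

// Velocity command: forward speed (m/s) and yaw rate (rad/s).
struct Command { double vx; double va; };

// Drive forward while the closest laser reading is farther than stop_distance,
// otherwise stop and rotate counterclockwise (the behaviour described above).
Command dontCrash(const std::vector<float>& ranges, float stop_distance = 0.3f)
{
    float closest = std::numeric_limits<float>::infinity();
    for (float r : ranges)
        if (r > 0.0f)                      // ignore invalid (zero/negative) readings
            closest = std::min(closest, r);

    if (closest > stop_distance)
        return {0.3, 0.0};                 // free ahead: keep driving forward
    return {0.0, 0.5};                     // too close: rotate counterclockwise in place
}

int main()
{
    std::vector<float> fakeScan = {1.2f, 0.8f, 0.25f, 0.9f};   // hypothetical readings (m)
    Command cmd = dontCrash(fakeScan);
    std::cout << "vx = " << cmd.vx << ", va = " << cmd.va << "\n";
}
</syntaxhighlight>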
 




'''Exercise 2, Lysander Herrewijn:'''


The robot behaves as expected. It drives forward, gets closer to the wall, the scanner data indicates the robot is getting too close, and it starts to turn clockwise. It then goes forward again until it gets too close to the left wall.


In this case, the robot can pass the block slightly. However, once the scanner data indicates a wall is too close, it stops driving forward and starts turning. Note that the turn is less sharp than in the previous example, as the robot needs to turn through a smaller angle (counterclockwise) before the scanner no longer observes the obstacle. At that point, it can move forward again and the robot is sure it will not hit anything in front of it.
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise2map1_lysander.mp4?ref_type=heads Screen capture exercise 2 map 1]'''            '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise2map2_lysander.mp4?ref_type=heads Screen capture exercise 2 map 2]'''
 
 
 
'''Exercise 1, Adis Husanovic:'''
 
The current method ensures that the mobile robot moves forward while avoiding collisions with obstacles closer than 0.15 m in its path. This approach relies on monitoring of the environment using an onboard laser range sensor to detect potential obstacles. As the robot advances, it compares distance readings from the sensors with a predefined threshold distance, representing the desired safety margin between the robot and any detected object. When detecting an obstacle within this threshold distance, the robot stops before reaching the obstacle. '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/tree/main/exercise1_Adis?ref_type=heads Code]'''        '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_1.webm?ref_type=heads Screen capture exercise 1]'''
 
 
'''Exercise 2, Adis Husanovic:'''
 
In both test scenarios of exercise 2, conducted on different maps, the robot shows the desired behavior without any issues. In the first map, the robot stops before reaching the object, showing its ability to detect and respond to obstacles effectively.
 
In the second map, the robot navigates along the side of the object and comes to a stop when encountering the wall, thereby avoiding any collision.


'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_2_Test1.webm?ref_type=heads Screen capture exercise 2 map 1]  [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_2_Test2.webm?ref_type=heads Screen capture exercise 2 map 2]'''


'''Exercise 1, Marten de Klein:'''


The laser data is used to stop the robot if the distance to an object is smaller than 0.15 m. Since the robot only has to stop for this exercise, the most straightforward method is to stop if any of the laser readings becomes smaller than this distance. This also means that the robot will stop if it passes very close to an object, which is desired because the robot is not a point but has a width. The code consists of assigning values to speed variables, which at the end of the code are sent to the robot. The speed variables are first set to a forward velocity, and if the laser scanner encounters an object within its safe distance, it sets the speed variables to zero.
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/default_map_Marten.mp4?ref_type=heads Screen capture exercise 1] [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/src/dont_crash.cpp?ref_type=heads Code]'''




'''Exercise 2, Marten de Klein:'''


The robot functions in both maps as desired. The robot stops before the object in the first map. The robot moves along the side of the object in the second map and stops when encountering the wall.


'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/map1_test_Marten.mp4?ref_type=heads Screen capture exercise 2 map 1]'''        '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/map2_test_Marten.mp4?ref_type=heads Screen capture exercise 2 map 2]'''


'''Exercise 1, Ruben van de Guchte:'''


The code calculates which points from the laser data are relevant for bumping into objects, based on the safe distance specified in the script. It then checks whether the lidar returns an object in front of the robot that is too close. The robot now stops 0.5 meters before an obstacle, but this can easily be fine-tuned using the safety_distance variable. It should be taken into account that the lidar scans are discrete in time, so if the robot were driving very fast, unlucky timing might push it within the safety distance. '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/dont_crash.cpp?ref_type=heads Code]'''  '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/Videos_Ruben/Mobile_roboto__Running__-_Oracle_VM_VirtualBox_2024-04-29_13-27-37.mp4?ref_type=heads Screen capture exercise 1]'''
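
The beam-relevance check can be sketched roughly as follows. This is illustrative and not the repository code: the angle convention (0 rad = straight ahead) and the robot half-width of 0.25 m are assumptions, while the 0.5 m safety distance matches the value mentioned above.

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

const float kPi = 3.14159265358979f;

// A beam hit counts as a collision risk only if it lies inside the corridor the
// robot sweeps while driving straight: closer ahead than safety_distance and
// within half the robot width laterally.
bool tooClose(const std::vector<float>& ranges,
              float angle_min, float angle_increment,
              float safety_distance = 0.5f, float half_width = 0.25f)
{
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        float r = ranges[i];
        if (r <= 0.0f) continue;                       // skip invalid readings
        float a = angle_min + i * angle_increment;     // beam angle, 0 = straight ahead
        float x = r * std::cos(a);                     // distance ahead of the robot
        float y = r * std::sin(a);                     // lateral offset of the hit
        if (x > 0.0f && x < safety_distance && std::fabs(y) < half_width)
            return true;                               // relevant beam, and it is too close
    }
    return false;
}

int main()
{
    // Hypothetical 5-beam scan spanning -90 to +90 degrees, with an obstacle straight ahead.
    std::vector<float> scan = {2.0f, 1.5f, 0.4f, 1.5f, 2.0f};
    std::cout << (tooClose(scan, -kPi / 2.0f, kPi / 4.0f) ? "stop" : "drive") << "\n";
}
</syntaxhighlight>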


'''Exercise 2, Ruben van de Guchte:'''


After fine-tuning the width of the robot, it works nicely. '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/Videos_Ruben/Mobile_roboto__Running__-_Oracle_VM_VirtualBox_2024-04-29_14-07-45.mp4?ref_type=heads Screen capture exercise 2]'''




'''Exercise 1, Vincent Hoffmann:'''


The code uses the laser data to determine the distance to the wall. When the wall is at 0.3 meters, the robot stops and prints a message that it has stopped, after which the program ends.


'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/src/dont_crash.cpp?ref_type=heads Code]'''  '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise1_vincent.mp4?ref_type=heads Screen capture exercise 1]'''


'''Exercise 2, Vincent Hoffmann:'''


The robot works well in both cases. The 0.3 meter stop distance causes the robot to stop diagonally away from the wall on the second map, showing that the function works in more directions than just straight ahead.


'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise2_map1_vincent.mp4?ref_type=heads Screen capture exercise 2 map 1]'''    '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise2_map2_vincent.mp4?ref_type=heads Screen capture exercise 2 map 2]'''


'''Exercise 1, Timo van der Stokker:'''


With the data taken from the laser, the robot keeps checking the distances to the walls the lidar is aimed at. If the distance found is less than the so-called stop distance, the velocity of the robot is set to 0 and it therefore stops. The stop distance can easily be changed by altering the stop_distance variable, which is currently set to 0.2 meters. '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/src/dont_crash.cpp?ref_type=heads Code]''' '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/Screen_captures/DefaultMap_Timo.mp4?ref_type=heads Screen capture exercise 1]'''


'''Exercise 2, Timo van der Stokker:'''


The robot works in both maps with the chosen stop_distance and does not crash.    [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/exercise1_timo/Screen_captures/Map1_Timo.mp4?ref_type=heads '''Screen capture exercise 2 Map 1''']          '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/Screen_captures/Map2_Timo.mp4?ref_type=heads Screen capture exercise 2 Map 2]'''


== '''Practical exercise week 1''' ==
The laser had less noise than we expected and is fairly accurate in its measurements. However, only objects at the height of the laser can be seen, as it only scans in a single plane at its own height. For example, when standing in front of the robot, the laser could only detect our shins, which appeared as two half circles.


When testing our don't-crash code on the robot, we noticed that the stopping distance needed to include the distance between the measuring point of the laser and the edge of the robot. This offset was measured to be approximately 10 cm. After changing this, the robot was first tested on a barrier, as seen in [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/exercises/Practical_exercise1/Stop_barrier.mp4?ref_type=heads '''Robot stopping at barrier''']
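
A small sketch of how this offset can be folded into the stop threshold is shown below; the 0.10 m value is the offset measured above, while the desired clearance value and the variable names are illustrative.

<syntaxhighlight lang="cpp">
#include <iostream>

int main()
{
    // The laser measures from its own origin, which sits roughly 0.10 m inside the robot edge,
    // so the raw-range threshold must be the desired clearance plus that offset.
    const double desired_clearance = 0.30;   // clearance between robot edge and obstacle (m), illustrative
    const double sensor_to_edge    = 0.10;   // measured offset between laser origin and robot edge (m)
    const double stop_distance     = desired_clearance + sensor_to_edge;

    std::cout << "stop when a raw laser range drops below " << stop_distance << " m\n";
}
</syntaxhighlight>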


Next we let a person walk in front of it to see if the code would still work. Fortunately, it did, as can be seen in [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/exercises/Practical_exercise1/stop_moving_feet.mp4?ref_type=heads '''Robot stopping at passing person''']


Finally, we tested an additional version of the code that turns the robot when it sees an obstacle and then continues. This can be seen in [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/exercises/Practical_exercise1/Stop_turn_feet.mp4?ref_type=heads '''Robot stopping and turning at feet''']


== '''Local Navigation Assignment week 2''' ==


=== '''Vector field histogram (VFH)''' ===
(Explanation of implementation)
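
Until the implementation description above is filled in, the following generic sketch only illustrates the basic VFH idea; it is not the team's implementation, and the bin count, density threshold and 1/range weighting are assumptions chosen purely for illustration.

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

const float kPi = 3.14159265358979f;

// Build a polar histogram of obstacle density from the laser scan and return the
// index of the free sector whose centre is closest to the goal direction (-1 if none).
int chooseSector(const std::vector<float>& ranges, float angle_min, float angle_increment,
                 float goal_angle, int n_bins = 36, float density_threshold = 5.0f)
{
    std::vector<float> histogram(n_bins, 0.0f);
    const float sector_width = 2.0f * kPi / n_bins;

    // Closer obstacles contribute more weight to the sector their beam falls in.
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        float r = ranges[i];
        if (r <= 0.0f) continue;                                   // skip invalid readings
        float a = angle_min + i * angle_increment;                 // beam angle, 0 = straight ahead
        int bin = static_cast<int>((a + kPi) / sector_width) % n_bins;
        histogram[bin] += 1.0f / r;
    }

    // Among sectors below the density threshold, pick the one closest to the goal heading.
    int best = -1;
    float best_diff = 1e9f;
    for (int b = 0; b < n_bins; ++b) {
        if (histogram[b] > density_threshold) continue;            // sector considered blocked
        float centre = -kPi + (b + 0.5f) * sector_width;
        float diff = std::fabs(centre - goal_angle);
        if (diff < best_diff) { best_diff = diff; best = b; }
    }
    return best;
}

int main()
{
    std::vector<float> scan(180, 3.0f);                   // hypothetical mostly-free 180-degree scan
    for (int i = 85; i < 95; ++i) scan[i] = 0.4f;         // block the beams straight ahead
    int sector = chooseSector(scan, -kPi / 2.0f, kPi / 180.0f, 0.0f);
    std::cout << "steer towards sector " << sector << "\n";
}
</syntaxhighlight>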


Advantages:


* Implementing VFH for navigation is relatively straightforward, requiring basic processing of LiDAR data to compute the histogram of obstacles' directions.
* VFH can generate smooth and collision-free paths for the robot by considering both obstacle avoidance and goal-reaching objectives.
* VFH is computationally efficient and robust to noisy sensor data.


Disadvantages:


* VFH may have difficulty distinguishing overlapping obstacles, especially if they are close together and occupy similar angular regions in the LiDAR's field of view.
* In complex environments with narrow passages or dense clutter, VFH may struggle to find feasible paths due to the limited information provided by the LiDAR sensor and the simplicity of the VFH algorithm.
* VFH performance can be sensitive to parameter settings such as the size of the histogram bins or the threshold for obstacle detection. Tuning these parameters for optimal performance may require extensive experimentation.
* VFH primarily focuses on local obstacle avoidance and may not always generate globally optimal paths, especially in environments with long-range dependencies or complex structures.


Possible failure scenarios and how to prevent them:


Implementation of local and global algorithms:


=== '''Dynamic Window Approach (DWA)''' ===
'''Implementation:'''


Consider velocities (𝑣, 𝜔) during a time step 𝑡 that are possible, admissible, and reachable.


Reactive collision avoidance based on robot dynamics


Maximizing objective function 𝐺


* Heading
* Clearance
* Velocity
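
In the standard DWA formulation (following Fox, Burgard and Thrun), these three terms are combined into one weighted objective; the smoothing function σ and the weights α, β, γ are tuning choices that still have to be selected for our robot:

<math>G(v, \omega) = \sigma\big(\alpha \cdot \mathrm{heading}(v, \omega) + \beta \cdot \mathrm{clearance}(v, \omega) + \gamma \cdot \mathrm{velocity}(v, \omega)\big)</math>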


Detailed implementation questions (a generic sketch addressing these follows the list below):


* How to check if a path is valid?
* How to discretize 𝑣 and 𝜔?
* How to account for robot size?
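
Putting the pieces together, the sketch below shows how the dynamic window can be sampled and scored with the objective 𝐺. It is illustrative only: the acceleration limits, weights, discretization steps and the dummy clearance function are assumptions, and a real implementation would compute clearance from the laser scan and forward-simulated trajectories.

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cmath>
#include <iostream>

const double kPi = 3.14159265358979;

struct Velocity { double v; double w; };   // forward velocity (m/s) and yaw rate (rad/s)

double normalizeAngle(double a)
{
    while (a > kPi)  a -= 2.0 * kPi;
    while (a < -kPi) a += 2.0 * kPi;
    return a;
}

// Sample (v, w) pairs inside the dynamic window (reachable within one time step given the
// acceleration limits), discard inadmissible ones and keep the pair maximizing G.
Velocity chooseVelocity(double v_cur, double w_cur, double goal_heading,
                        double (*clearance)(double, double))
{
    const double dt = 0.2, a_max = 0.5, alpha_max = 1.0;              // assumed limits
    const double v_lo = std::max(0.0, v_cur - a_max * dt);
    const double v_hi = std::min(0.5, v_cur + a_max * dt);
    const double w_lo = std::max(-1.0, w_cur - alpha_max * dt);
    const double w_hi = std::min(1.0, w_cur + alpha_max * dt);

    Velocity best{0.0, 0.0};
    double best_score = -1e9;
    for (double v = v_lo; v <= v_hi + 1e-9; v += 0.05) {              // discretize v
        for (double w = w_lo; w <= w_hi + 1e-9; w += 0.1) {           // discretize w
            double clear = clearance(v, w);                           // distance to nearest obstacle on this arc
            if (clear <= 0.0) continue;                               // not admissible: would collide
            double heading = 1.0 - std::fabs(normalizeAngle(goal_heading - w * dt)) / kPi;
            double velocity = v / 0.5;                                // reward forward progress
            double G = 0.8 * heading + 0.1 * clear + 0.1 * velocity;  // weighted objective
            if (G > best_score) { best_score = G; best = {v, w}; }
        }
    }
    return best;
}

// Dummy clearance model for the demo: pretend turning left opens up more space.
double fakeClearance(double /*v*/, double w) { return 0.5 + w; }

int main()
{
    Velocity cmd = chooseVelocity(0.2, 0.0, 0.5, fakeClearance);
    std::cout << "chosen v = " << cmd.v << ", w = " << cmd.w << "\n";
}
</syntaxhighlight>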


'''Advantages:'''


* Effective at avoiding obstacles detected by the LiDAR sensor in real-time. It dynamically adjusts the robot's velocity and heading to navigate around obstacles while aiming to reach its goal.
* Focuses on local planning, considering only nearby obstacles and the robot's dynamics when generating trajectories. This enables the robot to react quickly to changes in the environment without requiring a global map.


'''Disadvantages:'''


* Can get stuck in local minima, where the robot is unable to find a feasible trajectory to its goal due to obstacles blocking its path. This can occur in highly cluttered environments or when the goal is located in a narrow passage.
* Does not always find the optimal path to the goal, especially in environments with complex structures or long-range dependencies.
* The performance can be sensitive to the choice of parameters, such as the size of the dynamic window or the velocity and acceleration limits of the robot. Tuning these parameters can be challenging and may require empirical testing.


'''Demonstration:'''
