Mobile Robot Control 2024 The Iron Giant

From Control Systems Technology Group


== Week 1: theoretical exercises ==
For week one we had to do the following exercises:  


Question 1: Think of a method to make the robot drive forward but stop before it hits something.  


Question 2: Run your simulation on two maps, one containing a large block in front of the robot, the second containing a block the robot can pass by safely when driving straight. 


These exercises were performed individually by the members, leading to different outcomes.


There were a lot of similarities between the answers. Every group member used the laser data to determine whether objects were close: each implemented a loop over the laser range data that checks the individual values against a safety threshold. If any value was below the threshold, all members changed the signal sent to the motors to a zero signal.
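This shared approach can be sketched as follows. It is a minimal illustration in the spirit of the members' dont_crash.cpp files; the function and parameter names (obstacleTooClose, safetyThreshold) are ours, not the course API:

```cpp
#include <vector>

// Sketch of the shared approach: scan every laser range and report
// whether any reading falls below the safety threshold.
bool obstacleTooClose(const std::vector<float>& ranges, float safetyThreshold) {
    for (float r : ranges) {
        if (r < safetyThreshold) {
            return true;  // something is within the safety margin
        }
    }
    return false;
}

// The control loop then sends either a forward reference or a zero signal,
// e.g.: if (obstacleTooClose(scan, 0.3f)) stop(); else driveForward();
```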


There were some differences too. Lysander made his robot start turning when an object was detected. In the next loop the robot therefore receives different laser scan data, and after a few loops the object may be outside the angles the laser scanner can see, at which point the robot drives forward again, having turned away from the obstacle.
 


Ruben decided not to loop over all laser ranges but to check only the ones in front of the robot. To determine which laser data actually represents the area the robot is going to drive through, the code makes a geometric calculation, using the arc tangent of the required safety distance and the width of the robot to determine the maximum angle at which the laser data needs to be checked. When checking whether the laser values are large enough, it then adds a geometric term so that the safety boundary is consistent when measured parallel to the driving direction of the robot, i.e. a straight line ahead of the robot rather than a circle around the lidar.
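This corridor check can be sketched roughly as follows; the parameter names and the beam-angle bookkeeping are our own assumptions, not Ruben's exact code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Only beams within the angular window that covers the robot's driving
// corridor are inspected, and each beam's limit grows with 1/cos(angle)
// so that the safety boundary is a straight line ahead of the robot,
// not a circle around the lidar.
bool corridorBlocked(const std::vector<float>& ranges,
                     float angleMin, float angleIncrement,
                     float safetyDistance, float robotWidth) {
    // Maximum beam angle that can still hit the front safety boundary.
    const float maxAngle = std::atan2(robotWidth / 2.0f, safetyDistance);
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        const float angle = angleMin + i * angleIncrement;
        if (std::fabs(angle) > maxAngle) continue;  // beam looks past the corridor
        // Distance at which this beam crosses the straight safety boundary.
        const float limit = safetyDistance / std::cos(angle);
        if (ranges[i] < limit) return true;
    }
    return false;
}
```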


An overview of the code and video demonstrations can be found in the table below.
{| class="wikitable"
|+
!Name
!Code
!Video exercise 1
!Video exercise 2
|-
|'''Lysander Herrewijn'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/dontcrash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise1_lysander.mp4?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise2map1_lysander.mp4?ref_type=heads Screen capture exercise 2 map 1]'''  '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise2map2_lysander.mp4?ref_type=heads Screen capture exercise 2 map 2]'''
|-
|'''Adis Husanovic'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/src/dont_crash.cpp Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_1.webm?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_2_Test1.webm?ref_type=heads Screen capture exercise 2 map 1]  [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_2_Test2.webm?ref_type=heads Screen capture exercise 2 map 2]'''
|-
|'''Marten de Klein'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/src/dont_crash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/default_map_Marten.mp4?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/map1_test_Marten.mp4?ref_type=heads Screen capture exercise 2 map 1]'''  '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/map2_test_Marten.mp4?ref_type=heads Screen capture exercise 2 map 2]'''
|-
|'''Ruben van de Guchte'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/dont_crash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/Videos_Ruben/Mobile_roboto__Running__-_Oracle_VM_VirtualBox_2024-04-29_13-27-37.mp4?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/Videos_Ruben/Mobile_roboto__Running__-_Oracle_VM_VirtualBox_2024-04-29_14-07-45.mp4?ref_type=heads Screen capture exercise 2]'''
|-
|'''Vincent Hoffmann'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/src/dont_crash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise1_vincent.mp4?ref_type=heads Screen capture exercise 1]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise2_map1_vincent.mp4?ref_type=heads Screen capture exercise 2 map 1]'''    '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise2_map2_vincent.mp4?ref_type=heads Screen capture exercise 2 map 2]'''
|-
|'''Timo van der Stokker'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/src/dont_crash.cpp?ref_type=heads Code]'''
|'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/Screen_captures/DefaultMap_Timo.mp4?ref_type=heads Screen capture exercise 1]'''
|[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/exercise1_timo/Screen_captures/Map1_Timo.mp4?ref_type=heads '''Screen capture exercise 2 Map 1''']    '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/Screen_captures/Map2_Timo.mp4?ref_type=heads Screen capture exercise 2 Map 2]'''
|} 


'''Exercise 1, Lysander Herrewijn:'''


The code utilizes the minimum of the scan data. It loops over all data and saves the smallest distance. If the smallest distance is larger than 0.3 m, the robot drives forward; if it is smaller than 0.3 m, the robot rotates in counterclockwise direction. When new scanner data is available, the smallest laser distance is redetermined. At a certain point, the robot has turned enough that it will drive forward again until it meets a new wall. The distance of 0.3 m is chosen because it gives the robot enough space to make its turn, with a margin of error for the scanner data and for turning.  '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/dontcrash.cpp?ref_type=heads Code]'''


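A minimal sketch of this minimum-then-turn behaviour (the struct and velocity values are illustrative, not the exact code linked above):

```cpp
#include <algorithm>
#include <vector>

// Forward and angular velocity sent to the base.
struct BaseCommand { double vx; double va; };

// Pick the smallest scan distance and either drive forward or rotate
// counterclockwise until the obstacle leaves the scanner's view.
BaseCommand driveOrTurn(const std::vector<float>& ranges) {
    const float minDist = *std::min_element(ranges.begin(), ranges.end());
    if (minDist < 0.3f) {
        return {0.0, 0.5};   // too close: stop translating, rotate counterclockwise
    }
    return {0.5, 0.0};       // path clear: drive forward
}
```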
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise1_lysander.mp4?ref_type=heads Screen capture exercise 1]'''






'''Exercise 2, Lysander Herrewijn:'''


The robot behaves as expected. It drives forward and gets closer to the wall; when the scanner data indicates the robot is getting too close, it starts to turn in clockwise direction. It drives forward again until it gets too close to the left wall.
 
In this case, the robot can pass the block slightly. However, as the scanner data indicates a wall is too close, it stops driving forward and starts turning. Do notice the turn is less sharp than in the previous example, as the robot needs to turn through a smaller counterclockwise angle before the scanner no longer observes the obstacle. At that point, it can move forward, and the robot is sure it will not hit anything in front of it.


'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise2map1_lysander.mp4?ref_type=heads Screen capture exercise 2 map 1]'''            '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Lysander/exercise2map2_lysander.mp4?ref_type=heads Screen capture exercise 2 map 2]'''






'''Exercise 1, Adis Husanovic:'''


The current method ensures that the mobile robot moves forward while avoiding collisions with obstacles closer than 0.15 m in its path. The approach relies on monitoring the environment with the onboard laser range sensor to detect potential obstacles. As the robot advances, it compares the distance readings from the sensor with a predefined threshold distance, representing the desired safety margin between the robot and any detected object. When an obstacle is detected within this threshold distance, the robot stops before reaching it. '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/src/dont_crash.cpp Code]'''        '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_1.webm?ref_type=heads Screen capture exercise 1]'''




'''Exercise 2, Adis Husanovic:'''


In both test scenarios conducted in different maps from exercise 2, the robot shows the desired behavior without any issues. In the first map, the robot stops before reaching the object, showing its ability to detect and respond to obstacles effectively.


In the second map, the robot navigates along the side of the object and comes to a stop when encountering the wall, thereby avoiding any collision.


'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_2_Test1.webm?ref_type=heads Screen capture exercise 2 map 1]  [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Adis/Screen_Captures/ScreenCapture_Exercise_2_Test2.webm?ref_type=heads Screen capture exercise 2 map 2]'''










'''Exercise 1, Marten de Klein:'''


The laser data is used to stop the robot if the distance to an object is smaller than 0.15 m. Since the robot only has to stop for this exercise, the most straightforward method is to stop if any of the laser readings becomes smaller than this distance. This also means that the robot will stop if it moves past an object very closely, which is desired because the robot is not a point but has a width. The code assigns values to speed variables which are sent to the robot at the end of the loop. The speed variables are first set to a forward velocity, and if the laser scanner encounters an object within its safe distance they are set to zero.
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/default_map_Marten.mp4?ref_type=heads Screen capture exercise 1] [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/src/dont_crash.cpp?ref_type=heads Code]'''
 
 
'''Exercise 2, Marten de Klein:'''
 
The robot functions in both maps as desired. The robot stops before the object in the first map. The robot moves along the side of the object in the second map and stops when encountering the wall.
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/map1_test_Marten.mp4?ref_type=heads Screen capture exercise 2 map 1]'''        '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Marten/Captures/map2_test_Marten.mp4?ref_type=heads Screen capture exercise 2 map 2]'''
 
 
'''Exercise 1, Ruben van de Guchte:'''
 
The code calculates which points from the data are relevant for bumping into objects, based on the safe distance specified in the script. It then checks whether the lidar returns an object in front of the robot that is too close. The robot now stops 0.5 meters before an obstacle, but this can easily be fine-tuned using the safety_distance variable. It should be taken into account that the lidar scans are not a continuous function, so if the robot were moving very fast, unlucky timing might push it within the safety distance. '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/dont_crash.cpp?ref_type=heads Code]'''  '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/Videos_Ruben/Mobile_roboto__Running__-_Oracle_VM_VirtualBox_2024-04-29_13-27-37.mp4?ref_type=heads Screen capture exercise 1]'''
 
'''Exercise 2, Ruben van de Guchte:'''
 
After fine-tuning the width of the robot it works nicely. '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/Exercise1_Ruben/Videos_Ruben/Mobile_roboto__Running__-_Oracle_VM_VirtualBox_2024-04-29_14-07-45.mp4?ref_type=heads Screen capture exercise 2]'''
 
 
 
 
'''Exercise 1, Vincent Hoffmann:'''
 
The code uses the laser data to determine the distance to the wall. When the wall is 0.3 meters away, the robot stops and prints a message that it has stopped, after which the program ends.
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/src/dont_crash.cpp?ref_type=heads Code]'''  '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise1_vincent.mp4?ref_type=heads Screen capture exercise 1]'''
 
'''Exercise 2, Vincent Hoffmann:'''
 
The robot works well in both cases. The 0.3 meter stop distance causes the robot to stop diagonally away from the wall on the second map, showing that the check works in more directions than just straight ahead.
 
'''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise2_map1_vincent.mp4?ref_type=heads Screen capture exercise 2 map 1]'''    '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_Vincent/videos/exercise2_map2_vincent.mp4?ref_type=heads Screen capture exercise 2 map 2]'''
 
 
 


'''Exercise 1, Timo van der Stokker:'''


Using the data taken from the laser, the robot keeps checking the distances to the walls the lidar is aimed at. If the distance found is less than the so-called stop distance, the velocity of the robot is set to zero and the robot therefore stops. The stop distance can easily be changed by altering the stop_distance variable, which is now set to 0.2 meters. '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/src/dont_crash.cpp?ref_type=heads Code]''' '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/Screen_captures/DefaultMap_Timo.mp4?ref_type=heads Screen capture exercise 1]'''


'''Exercise 2, Timo van der Stokker:'''


The robot works in both maps with the chosen stop_distance and does not crash.    [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/exercise1_timo/Screen_captures/Map1_Timo.mp4?ref_type=heads '''Screen capture exercise 2 Map 1''']          '''[https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/main/Weekly_exercises/Week1/exercise1_timo/Screen_captures/Map2_Timo.mp4?ref_type=heads Screen capture exercise 2 Map 2]'''
 
== '''Practical exercise week 1''' ==
The laser had less noise than we expected; it is fairly accurate in its measurements. However, only objects at the height of the laser can be seen, as the laser only scans in its own horizontal plane. For example, when standing in front of the robot, the laser could only detect our shins, as two half circles.
 
When testing our don't-crash files on the robot, we noticed that the stopping distance needed to include the distance from the measuring point to the edge of the robot, which was measured to be approximately 10 cm. After changing this, the robot was first tested on a barrier, as seen in [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/exercises/Practical_exercise1/Stop_barrier.mp4?ref_type=heads '''Robot stopping at barrier''']
 
Next we let a person walk in front of it to see if the code would still work. Fortunately, it did, as can be seen in [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/exercises/Practical_exercise1/stop_moving_feet.mp4?ref_type=heads '''Robot stopping at passing person''']
 
Finally we tested an additional code that turns the robot when it sees an obstacle, and then continues. This can be seen in [https://gitlab.tue.nl/mobile-robot-control/mrc-2024/the-iron-giant/-/blob/exercises/Practical_exercise1/Stop_turn_feet.mp4?ref_type=heads '''Robot stopping and turning at feet''']
 
 
== '''Local Navigation Assignment week 2''' ==
 
=== '''Vector field histogram (VFH)''' ===
The simplified vector field histogram approach was initially implemented as follows.
 
The robot starts out with a goal obtained from its global navigation, laser data from the lidar, and its own position, which it keeps track of internally. The laser data points are grouped into evenly spaced brackets. For each bracket the code checks how many points are below a safety threshold and saves this value.
 
Next, it calculates the direction of the goal by computing the angle between its own position and the goal position. It then checks whether the angle towards the goal is unoccupied by checking the value of the bracket corresponding to that angle, plus some brackets around it specified by a bracket safety width parameter. If the direction towards the goal is occupied, the code checks the brackets to the left and to the right and saves the closest unoccupied angle on either side. It then picks whichever deviation is smaller, left or right, and sets that angle as its new goal.
 
Afterwards it compares its own angle with the goal angle and drives forward if they align within a small margin, or otherwise turns towards the direction of the goal. It also checks whether it has arrived at the goal; if so, it does not move at all and reports to the global navigation that it is at the goal position.
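The histogram-building and direction-picking steps above can be sketched as follows; the bracket count, threshold, and helper names are illustrative assumptions, not the tuned values used on the robot:

```cpp
#include <cstddef>
#include <vector>

// Count, per angular bracket, how many laser points lie below the safety
// threshold (assumes ranges.size() >= numBrackets).
std::vector<int> buildHistogram(const std::vector<float>& ranges,
                                std::size_t numBrackets, float safetyThreshold) {
    std::vector<int> histogram(numBrackets, 0);
    const std::size_t perBracket = ranges.size() / numBrackets;
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        std::size_t b = i / perBracket;
        if (b >= numBrackets) b = numBrackets - 1;  // leftovers go to the last bracket
        if (ranges[i] < safetyThreshold) histogram[b]++;
    }
    return histogram;
}

// Starting from the bracket pointing at the goal, search left and right for
// the nearest free bracket; whichever deviation is smaller wins.
int nearestFreeBracket(const std::vector<int>& histogram, int goalBracket) {
    const int n = static_cast<int>(histogram.size());
    for (int offset = 0; offset < n; ++offset) {
        const int right = goalBracket + offset;
        const int left  = goalBracket - offset;
        if (right < n && histogram[right] == 0) return right;
        if (left >= 0 && histogram[left] == 0) return left;
    }
    return -1;  // every bracket is occupied
}
```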
 
This initial implementation had some oversights and edge cases that we came across when testing using the simulator and the real robot.


Advantages:


* Implementing VFH for navigation is relatively straightforward, requiring basic processing of LiDAR data to compute the histogram of obstacles' directions.
* VFH can generate smooth and collision-free paths for the robot by considering both obstacle avoidance and goal-reaching objectives.
* VFH is computationally efficient and robust to noisy sensor data.


Disadvantages:


* VFH may have difficulty distinguishing overlapping obstacles, especially if they are close together and occupy similar angular regions in the LiDAR's field of view.
* In complex environments with narrow passages or dense clutter, VFH may struggle to find feasible paths due to the limited information provided by the LiDAR sensor and the simplicity of the VFH algorithm.
* VFH performance can be sensitive to parameter settings such as the size of the histogram bins or the threshold for obstacle detection. Tuning these parameters for optimal performance may require extensive experimentation.
* VFH primarily focuses on local obstacle avoidance and may not always generate globally optimal paths, especially in environments with long-range dependencies or complex structures.


Possible failure scenarios and how to prevent them:


Implementation of local and global algorithms:


=== '''Dynamic Window Approach (DWA)''' ===
'''Implementation:'''


Consider velocities (𝑣, 𝜔) during 𝑡: possible, admissible, reachable


Reactive collision avoidance based on robot dynamics


Maximizing objective function 𝐺


* Heading
* Clearance
* Velocity


Detailed Implementation


* How to check if a path is valid?
* How discretize 𝑣 and 𝜔?
* How to account for robot size?


'''Advantages:'''


* Effective at avoiding obstacles detected by the LiDAR sensor in real-time. It dynamically adjusts the robot's velocity and heading to navigate around obstacles while aiming to reach its goal.
* Focuses on local planning, considering only nearby obstacles and the robot's dynamics when generating trajectories. This enables the robot to react quickly to changes in the environment without requiring a global map.


'''Disadvantages:'''


* Can get stuck in local minima, where the robot is unable to find a feasible trajectory to its goal due to obstacles blocking its path. This can occur in highly cluttered environments or when the goal is located in a narrow passage.
* Does not always find the optimal path to the goal, especially in environments with complex structures or long-range dependencies.
* The performance can be sensitive to the choice of parameters, such as the size of the dynamic window or the velocity and acceleration limits of the robot. Tuning these parameters can be challenging and may require empirical testing.


'''Demonstration:'''

Latest revision as of 12:06, 21 May 2024

Group members:

{| class="wikitable"
|+
!Name
!Student ID
|-
|Marten de Klein
|1415425
|-
|Ruben van de Guchte
|1504584
|-
|Vincent Hoffmann
|1897721
|-
|Adis Husanović
|1461915
|-
|Lysander Herrewijn
|1352261
|-
|Timo van der Stokker
|1228489
|}
Week 1: theoretical exercises

For week one we had to do the following exercises:

Question 1: Think of a method to make the robot drive forward but stop before it hits something.

Question 2: Run your simulation on two maps, one containing a large block in front of the robot, the second containing a block the robot can pass by safely when driving straight.

These exercises were performed individually by the members leading to different outcomes.

There were a lot of similarities between the answers. Every group member used the laser data to determine if object were close. They implemented a way to loop over the laser range data and check the individual values to see whether that value was lower than a safety threshold. If this would be the case all members changed the signal that would be sent to the motors to a zero signal.

But there were some differences too, Lysander made his robot start turning when the object was detected. In the next loop the robot would therefore get different laser scan data and after a few loops the object might be outside the angles the laser scanner can check and so it will drive forward again as it has turned away from the obstacle.

Ruben decided to not loop over all laser ranges but only check the ones in front of the robot. To determine which laser data actually represents the area the robot is going to drive to, a geometric calculation is made by the code, using the arc tangent of the required safety distance and the width of the robot to determine the maximum angle the laser data needs to check. Afterwards when checking whether the values of the laser data are not too big, it adds a geometric term to make sure the safety distance is consistent when looking parallel to the driving direction of the robot and not shaped like a circle due to the lidar.

An overview of the code and video demonstrations can be found in the table below.

Name Code Video exercise 1 Video exercise 2
Lysander Herrewijn Code Screen capture exercise 1 Screen capture exercise 2 map 1 Screen capture exercise 2 map 2
Adis Husanovic Code Screen capture exercise 1 Screen capture exercise 2 map 1 Screen capture exercise 2 map 2
Marten de Klein Code Screen capture exercise 1 Screen capture exercise 2 map 1 Screen capture exercise 2 map 2
Ruben van de Guchte Code Screen capture exercise 1 Screen capture exercise 2
Vincent Hoffmann Code Screen capture exercise 1 Screen capture exercise 2 map 1 Screen capture exercise 2 map 2
Timo van der Stokker Code Screen capture exercise 1 Screen capture exercise 2 Map 1 Screen capture exercise 2 Map 2

Exercise 1, Lysander Herrewijn:

The code uses the minimum of the scan data: it loops over all beams and saves the smallest distance. If this distance is larger than 0.3 m, the robot drives forward; if it is smaller than 0.3 m, the robot rotates in counterclockwise direction. When new scanner data becomes available, the smallest distance is determined anew. At a certain point the robot has turned far enough that it will drive forward again, until it meets a new wall. The distance of 0.3 m was chosen because it gives the robot enough space to make its turn, with a margin for errors in the scanner data and in the turning. Code

Screen capture exercise 1
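One control cycle of this behavior could look roughly like this (illustrative Python; the speeds are placeholders and the real code runs in the course's C++ framework):

```python
def step(ranges, forward_speed=0.2, turn_speed=0.5, stop_distance=0.3):
    """Drive forward unless the closest beam is within stop_distance,
    in which case rotate counterclockwise until the view clears."""
    closest = min(r for r in ranges if r > 0.0)  # ignore invalid zero readings
    if closest < stop_distance:
        return (0.0, turn_speed)  # turn in place, re-evaluated on the next scan
    return (forward_speed, 0.0)
```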


Exercise 2, Lysander Herrewijn:

The robot behaves as expected. It drives forward, gets closer to the wall, the scanner data indicates the robot is getting too close, and it starts turning counterclockwise. It then drives forward again until it gets too close to the left wall.

In this case, the robot can partially pass the block. However, as the scanner data indicates a wall is too close, it stops driving forward and starts turning. Notice that the turn is less sharp than in the previous example, as the robot needs to turn fewer degrees counterclockwise before the scanner no longer observes the obstacle. At that point it can move forward again, certain that it will not hit anything in front of it.

Screen capture exercise 2 map 1 Screen capture exercise 2 map 2


Exercise 1, Adis Husanovic:

The current method ensures that the mobile robot moves forward while avoiding collisions with obstacles closer than 0.15 m in its path. This approach relies on monitoring the environment with the onboard laser range sensor to detect potential obstacles. As the robot advances, it compares the distance readings from the sensor with a predefined threshold distance, representing the desired safety margin between the robot and any detected object. When it detects an obstacle within this threshold distance, the robot stops before reaching the obstacle. Code Screen capture exercise 1


Exercise 2, Adis Husanovic:

In both test scenarios conducted in different maps from exercise 2, the robot shows the desired behavior without any issues. In the first map, the robot stops before reaching the object, showing its ability to detect and respond to obstacles effectively.

In the second map, the robot navigates along the side of the object and comes to a stop when encountering the wall, thereby avoiding any collision.

Screen capture exercise 2 map 1 Screen capture exercise 2 map 2



Exercise 1, Marten de Klein:

The laser data is used to stop the robot if the distance to an object becomes smaller than 0.15 m. Since the robot only has to stop for this exercise, the most straightforward method is to stop as soon as any of the laser readings drops below this distance. This also means that the robot will stop when it moves past an object very closely, which is desired because the robot is not a point but has a width. The code assigns values to speed variables which are sent to the robot at the end of each loop. The speed variables are first set to a forward velocity; if the laser scanner encounters an object within the safe distance, they are set to zero. Screen capture exercise 1 Code


Exercise 2, Marten de Klein:

The robot functions in both maps as desired. The robot stops before the object in the first map. The robot moves along the side of the object in the second map and stops when encountering the wall.

Screen capture exercise 2 map 1 Screen capture exercise 2 map 2


Exercise 1, Ruben van de Guchte:

The code calculates which points from the data are relevant for bumping into objects, based on the safe distance specified in the script. It then checks whether the lidar returns an object in front of the robot that is too close. The robot now stops 0.5 meters before an obstacle, but this can easily be fine-tuned using the safety_distance variable. It should be taken into account that the lidar scans are not continuous in time; if the robot were moving very fast, unlucky timing might push it within the safety distance. Code Screen capture exercise 1

Exercise 2, Ruben van de Guchte:

After fine-tuning the width parameter of the robot, it works nicely. Screen capture exercise 2



Exercise 1, Vincent Hoffmann:

The code uses the laser data to determine the distance to the wall. When the wall is within 0.3 meters, the robot stops and prints a message that it has stopped, after which the program ends.

Code Screen capture exercise 1

Exercise 2, Vincent Hoffmann:

The robot works well in both cases. The 0.3 meter stop distance causes the robot to stop diagonally away from the wall on the second map, showing that the function works in more directions than just straight ahead.

Screen capture exercise 2 map 1 Screen capture exercise 2 map 2



Exercise 1, Timo van der Stokker:

Using the data taken from the laser, the robot keeps checking the distances to the walls at which the lidar is aimed. If a distance below the so-called stop distance is found, the velocity of the robot is set to 0 and the robot therefore stops. The stop distance can easily be changed by altering the stop_distance variable, which is now set to 0.2 meters. Code Screen capture exercise 1

Exercise 2, Timo van der Stokker:

The robot works in both maps with the chosen stop_distance and does not crash. Screen capture exercise 2 Map 1 Screen capture exercise 2 Map 2

Practical exercise week 1

The laser had less noise than we expected and is fairly accurate in its measurements. However, only objects at the height of the laser can be seen, as the scanner covers a single horizontal plane. For example, when standing in front of the robot, the laser could only detect our shins, which appeared as two half circles.

When testing our don't-crash files on the robot, we noticed that the stopping distance needed to include the distance between the lidar's measuring point and the edge of the robot, which we measured to be approximately 10 cm. After adding this offset, the robot was first tested on a barrier, as seen in Robot stopping at barrier

Next we let a person walk in front of it to see if the code would still work. Fortunately, it did, as can be seen in Robot stopping at passing person

Finally we tested an additional code that turns the robot when it sees an obstacle, and then continues. This can be seen in Robot stopping and turning at feet


Local Navigation Assignment week 2

Vector field histogram (VFH)

The simplified vector field histogram approach was initially implemented as follows.

The robot starts out with a goal obtained from its global navigation, laser data obtained from the lidar, and its own position, which it keeps track of internally. The laser data points are grouped into evenly spaced angular brackets. For each bracket, the code counts how many points are below a safety threshold and saves this value.

Next, it calculates the direction of the goal by computing the angle between its own position and the goal position. It then checks whether the direction towards the goal is unoccupied by inspecting the value of the bracket corresponding to that angle, together with some brackets around it, as specified by a bracket safety width parameter. If the direction towards the goal is occupied, the code checks the brackets to the left and to the right and saves the closest unoccupied angle on either side. It then picks whichever deviation is smaller, left or right, and sets that angle as its new goal.

Afterwards it compares its own heading with the goal angle: it drives forward if they align within a small margin, and otherwise turns towards the goal direction. It also checks whether it has arrived at the goal; if so, it does not move at all and signals to the global navigation that it has reached the goal position.
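The bracket-based histogram and direction search described above can be sketched as follows (illustrative Python; the number of brackets, the occupancy threshold, and the safety width are assumed parameters, not the group's tuned values):

```python
import math

N_BRACKETS = 36           # assumed angular resolution (10-degree brackets)
OCCUPIED_THRESHOLD = 0.6  # a point counts as an obstacle below this range
SAFETY_WIDTH = 1          # neighbouring brackets that must also be free

def build_histogram(angles, ranges):
    """Count, per bracket, how many laser points fall below the threshold."""
    counts = [0] * N_BRACKETS
    for angle, r in zip(angles, ranges):
        idx = int((angle + math.pi) / (2 * math.pi) * N_BRACKETS) % N_BRACKETS
        if 0.0 < r < OCCUPIED_THRESHOLD:
            counts[idx] += 1
    return counts

def is_free(counts, idx):
    """A direction is free if its bracket and SAFETY_WIDTH neighbours are empty."""
    return all(counts[(idx + d) % N_BRACKETS] == 0
               for d in range(-SAFETY_WIDTH, SAFETY_WIDTH + 1))

def pick_direction(counts, goal_angle):
    """Return the angle of the free bracket closest to the goal direction,
    searching symmetrically left and right; None if everything is blocked."""
    goal_idx = int((goal_angle + math.pi) / (2 * math.pi) * N_BRACKETS) % N_BRACKETS
    for offset in range(N_BRACKETS // 2 + 1):
        for idx in ((goal_idx - offset) % N_BRACKETS,
                    (goal_idx + offset) % N_BRACKETS):
            if is_free(counts, idx):
                return (idx + 0.5) / N_BRACKETS * 2 * math.pi - math.pi
    return None
```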

This initial implementation had some oversights and edge cases that we came across when testing using the simulator and the real robot.

Advantages:

  • Implementing VFH for navigation is relatively straightforward, requiring basic processing of LiDAR data to compute the histogram of obstacles' directions.
  • VFH can generate smooth and collision-free paths for the robot by considering both obstacle avoidance and goal-reaching objectives.
  • VFH is computationally efficient and robust to noisy sensor data.

Disadvantages:

  • VFH may have difficulty distinguishing overlapping obstacles, especially if they are close together and occupy similar angular regions in the LiDAR's field of view.
  • In complex environments with narrow passages or dense clutter, VFH may struggle to find feasible paths due to the limited information provided by the LiDAR sensor and the simplicity of the VFH algorithm.
  • VFH performance can be sensitive to parameter settings such as the size of the histogram bins or the threshold for obstacle detection. Tuning these parameters for optimal performance may require extensive experimentation.
  • VFH primarily focuses on local obstacle avoidance and may not always generate globally optimal paths, especially in environments with long-range dependencies or complex structures.

Possible failure scenarios and how to prevent them:

Implementation of local and global algorithms:

Dynamic Window Approach (DWA)

Implementation:

Consider velocities (𝑣, 𝜔) during 𝑡: possible, admissible, reachable

Reactive collision avoidance based on robot dynamics

Maximizing objective function 𝐺

  • Heading
  • Clearance
  • Velocity

Detailed Implementation

  • How to check if a path is valid?
  • How to discretize 𝑣 and 𝜔?
  • How to account for robot size?
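A rough sketch of such a window search over (𝑣, 𝜔), maximizing G = α·heading + β·clearance + γ·velocity, might look as follows (illustrative Python; all limits, weights, and the free_dist callback, assumed to return the free distance along the simulated arc, are placeholder assumptions, not the eventual implementation):

```python
import math

def discretize(lo, hi, n):
    """n evenly spaced samples covering [lo, hi]."""
    return [lo + (hi - lo) * i / (n - 1) for i in range(n)]

def dwa_best_velocity(v_now, w_now, free_dist,
                      v_max=0.5, w_max=1.0, a_max=0.5, aw_max=1.0,
                      dt=0.2, goal_heading=0.0,
                      alpha=0.8, beta=0.1, gamma=0.1):
    """Search the dynamic window of (v, w) pairs and return the pair
    maximizing G = alpha*heading + beta*clearance + gamma*velocity."""
    best, best_score = (0.0, 0.0), -math.inf
    # Reachable: velocities attainable within one time step dt.
    for v in discretize(max(0.0, v_now - a_max * dt),
                        min(v_max, v_now + a_max * dt), 5):
        for w in discretize(max(-w_max, w_now - aw_max * dt),
                            min(w_max, w_now + aw_max * dt), 7):
            # Admissible: the robot must be able to stop before the
            # nearest obstacle on this arc.
            if v > math.sqrt(2 * a_max * free_dist(v, w)):
                continue
            heading = 1.0 - abs(goal_heading - w * dt) / math.pi
            clearance = min(free_dist(v, w), 2.0) / 2.0
            score = alpha * heading + beta * clearance + gamma * (v / v_max)
            if score > best_score:
                best, best_score = (v, w), score
    return best
```

In open space with the goal straight ahead, this picks the fastest reachable forward velocity with (near-)zero rotation; with no free distance at all, only v = 0 remains admissible.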

Advantages:

  • Effective at avoiding obstacles detected by the LiDAR sensor in real-time. It dynamically adjusts the robot's velocity and heading to navigate around obstacles while aiming to reach its goal.
  • Focuses on local planning, considering only nearby obstacles and the robot's dynamics when generating trajectories. This enables the robot to react quickly to changes in the environment without requiring a global map.

Disadvantages:

  • Can get stuck in local minima, where the robot is unable to find a feasible trajectory to its goal due to obstacles blocking its path. This can occur in highly cluttered environments or when the goal is located in a narrow passage.
  • Does not always find the optimal path to the goal, especially in environments with complex structures or long-range dependencies.
  • The performance can be sensitive to the choice of parameters, such as the size of the dynamic window or the velocity and acceleration limits of the robot. Tuning these parameters can be challenging and may require empirical testing.

Demonstration: