Mobile Robot Control 2024 Rosey
== '''Group members:''' ==
{| class="wikitable"
!Name!!Student ID
|-
|Tessa Janssen||1503782
|-
|Jose Puig Talavera||2011174
|-
|Ruben Dragt||1511386
|-
|Thijs Beurskens||1310909
|-
|Pablo Ruiz Beltri||2005611
|-
|Riccardo Dalferro Nucci||2030039
|}
== '''<big>Week 1 - Don't crash code</big>:''' ==
=== Exercise 1 - The art of not crashing. ===
==== Overall approach ====
Every member of the group worked on this exercise separately, after which we combined and discussed the different approaches. Though each member's implementation is slightly different, they all share the same starting point: the robot continuously loops through the laser data, and when the range returned by one of the laser points falls within a predefined distance, the robot is commanded to act on this. Additionally, most approaches only loop through the front/middle part of the laser data, so that only obstacles that are actually in the robot's path are taken into account. The different methods are explained below.
<big>'''Implementation 1''': '''Stopping'''</big>
''Riccardo Dalferro Nucci & Ruben Dragt''
In this implementation the instruction to the robot is the simplest. The robot is instructed to keep moving forward using the io.sendBaseReference function, and when the laser returns a distance within 0.3 meters, the robot is commanded to stop by setting the forward velocity in io.sendBaseReference back to 0. With this method the robot will not continue moving as long as the obstacle stays within the 0.3 meters. If the robot is in front of a wall it will stay there forever, but with a moving obstacle such as a human walking by, the robot will eventually move forward again until the next obstacle. A video of the simulation can be found [https://tuenl-my.sharepoint.com/:v:/g/personal/t_h_janssen_student_tue_nl/EbWIje2GvzRInrJRvwxO44cBloUo262_6LwdQOqhViCpLA?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0NvcHkifX0&e=GVbDab here].
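A minimal sketch of this stopping behaviour is shown below. It assumes the emc-framework interface used in the course (emc::IO, io.readLaserData, io.sendBaseReference, emc::Rate); the header names, the 10 Hz loop rate and the 0.2 m/s forward speed are assumptions for illustration, not the exact values of the implementation.
<syntaxhighlight lang="cpp">
#include <emc/io.h>
#include <emc/rate.h>

// Sketch of Implementation 1: drive forward and stop when an obstacle comes
// within the stop distance. Framework names and all numeric values are
// assumptions for illustration.
int main()
{
    emc::IO io;
    emc::Rate r(10);                      // control loop at 10 Hz

    const double stop_distance = 0.3;     // [m] stop when anything is closer
    const double forward_speed = 0.2;     // [m/s] nominal forward velocity

    while (io.ok())
    {
        emc::LaserData scan;
        double vx = forward_speed;

        if (io.readLaserData(scan))
        {
            // Only check the middle part of the scan, i.e. roughly the area
            // directly in front of the robot.
            const int n = static_cast<int>(scan.ranges.size());
            for (int i = n / 3; i < 2 * n / 3; ++i)
            {
                if (scan.ranges[i] > 0.01 && scan.ranges[i] < stop_distance)
                {
                    vx = 0.0;             // obstacle too close: stop
                    break;
                }
            }
        }

        io.sendBaseReference(vx, 0.0, 0.0); // (x, y, theta) velocity reference
        r.sleep();
    }
    return 0;
}
</syntaxhighlight>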
'''<big>Implementation 2: Turning away</big>'''
Using this method, the robot is not instructed to just stop but also to turn away from the obstacle. The added benefit of this approach is that the robot will be able to start moving forward again, as the laser will eventually stop returning obstacles within the set distance.
* '''Method 1''' ''Tessa Janssen -'' In this code, the robot loops over part of the data by setting the for loop from <code>scan.ranges.size()/2 - range</code> to <code>scan.ranges.size()/2 + range</code> (setting the variable range to 150 gives a view of approximately 1.2 rad). Then, when an obstacle is detected within a set distance, e.g. 0.5 m, the forward velocity is set to 0 and the angular velocity is given a positive value. This results in the robot turning left when it has reached an obstacle. When there are no longer any obstacles, the angular velocity is set back to 0 and the forward velocity is given a positive value. The downside of always turning left is that the robot can easily get stuck in corners and does not take efficiency into account. A video of the simulation can be found [https://tuenl-my.sharepoint.com/:v:/g/personal/t_h_janssen_student_tue_nl/EbYqk0Ff0CFAjBtN-R8STfcB8drJKfsxID3TAu6xb4ybGw?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0NvcHkifX0&e=0QKdF1 here].
* '''Method 2''' ''Pablo Ruiz Beltri -'' In this approach, a safety distance and a cone size (in number of laser points) are set, with the cone centered in front of the robot. While the connection to the robot is active, the laser data in this cone is checked against the safety distance. If all ranges are greater than the safety distance the robot moves forward; if not, the robot rotates. To keep a forward path, the direction of rotation is determined by checking whether the safety-distance violation was on the left or the right side of the cone. After testing this approach, it was concluded that it is vulnerable to corners: when safety-distance violations come equally from the left and the right, the robot ends up in a blocked position.
* '''Method 3''' ''Thijs Beurskens -'' Here, the robot's cone of vision is determined by setting a minimum and maximum angle, which is then translated into the number of points the for loop should cover. The turning direction is determined by checking on which side of the robot the obstacles are closer, so the robot turns towards the side with more space. This approach again has the benefit of being able to keep moving around a map/obstacle course, while also being more clever about where it can go. It makes the robot less likely to get stuck in certain areas and more robust to changing environments. A sketch of this turn-away logic is given below the list.
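The sketch below outlines the turn-away logic of Method 3 under the same emc-framework assumptions as the earlier sketch; the cone angle, safety distance and speeds are placeholder parameters, not the tuned values.
<syntaxhighlight lang="cpp">
#include <emc/io.h>
#include <algorithm>

// Sketch of the turn-away logic of Method 3: look only inside a cone in front
// of the robot and, when something comes too close, turn towards the side
// with more free space. Framework names and numeric values are assumptions.
void stepTurnAway(emc::IO& io, double max_angle, double safe_distance,
                  double forward_speed, double turn_speed)
{
    emc::LaserData scan;
    if (!io.readLaserData(scan))
        return;

    // Translate the cone [-max_angle, +max_angle] into laser indices.
    const int n = static_cast<int>(scan.ranges.size());
    const int center = n / 2;
    const int half_width = static_cast<int>(max_angle / scan.angle_increment);
    const int lo = std::max(0, center - half_width);
    const int hi = std::min(n - 1, center + half_width);

    double min_left = 1e9, min_right = 1e9;
    for (int i = lo; i <= hi; ++i)
    {
        const double d = scan.ranges[i];
        if (d < 0.01)
            continue;                                   // skip invalid readings
        if (i < center)
            min_right = std::min(min_right, d);         // lower indices: right side
        else
            min_left = std::min(min_left, d);
    }

    if (std::min(min_left, min_right) > safe_distance)
        io.sendBaseReference(forward_speed, 0.0, 0.0);  // path is clear
    else if (min_left > min_right)
        io.sendBaseReference(0.0, 0.0, turn_speed);     // more space on the left
    else
        io.sendBaseReference(0.0, 0.0, -turn_speed);    // more space on the right
}
</syntaxhighlight>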
=== Exercise 2 - Testing your don't crash ===
During the first testing session it became clear that the parameters used in simulation needed to be adjusted for the real robot. In particular, the velocity needed to be tuned down when using Coco/Bobo, as they do not have the speed limiter built in. Moving too fast towards an obstacle meant that when the robot saw an obstacle within the safety range, it did not have enough time to stop before hitting it. Tuning the velocities and obstacle distances was a doable fix, however, and the code from method 3 (Thijs) ended up working very nicely in an environment with multiple different obstacles. The video showing this approach can be seen [https://tuenl-my.sharepoint.com/:v:/g/personal/t_h_janssen_student_tue_nl/EfOj6xPD00tFkXR1Ucylq2QBkDdMrKnozQXRP-P926NqbQ?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0NvcHkifX0&e=fG8zYF here].
<nowiki>*</nowiki>INSERT SCREENCAP FROM TEST MAPS??*
== '''<big>Week 2 - Local navigation:</big>''' ==
=== '''<big>Vector field histogram:</big>''' ===
''Team 1: Thijs Beurskens, Pablo Ruiz Beltri & Riccardo Dalferro Nucci''
==== <big>Concept</big> ====
==== <big>Simulation and Implementation</big> ====
==== <big>Reflection</big> ====
=== <big>'''Artificial potential fields:'''</big> ===
''Team 2: Tessa Janssen, Jose Puig Talavera & Ruben Dragt''
==== <big>Concept</big> ====
The main idea behind an artificial potential field is calculating both an attractive potential towards a target and a repulsive potential away from obstacles, which are summed together to form the potential field. The resulting force vector is calculated by taking the negative of the gradient of the potential field. The attractive potential is determined by the following formula
<math> U_{att}(q) = \frac{1}{2} k_{att} \|q - q_{goal}\|^2 </math>,
which depends on the difference between the current position of the robot and the target position. The target position was set as a point 6 meters further in the y direction than the starting position, while the current pose was obtained from the robot's odometry data (with starting coordinate (1.5, 0, 0)). Because the odometry is not perfect, this introduced a small error in the final position of the robot, which was not corrected for in this exercise. The repulsive potential is given by
<math> U_{rep,j}(q) = \frac{1}{2} k_{rep,j} \left( \frac{1}{\|q - q_{j}\|} - \frac{1}{\rho_{o}} \right)^2 </math> if <math> \|q - q_{j}\| \leq \rho_{o} </math>
or
<math> U_{rep,j}(q) = 0 </math> if <math> \|q - q_{j}\| \geq \rho_{o} </math>,
which uses the laser data to determine the distance from the robot to an obstacle. For this specific implementation, each laser point returning a distance within the predefined space buffer ρ<sub>o</sub> was registered as a separate obstacle, with the corresponding repulsive potential being added to the total potential field. The constants k<sub>att</sub> and k<sub>rep</sub> were used to adjust the relative importance of attraction and repulsion.
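For reference, the force vector used in the next paragraph follows from the standard negative gradients of these potentials (written here in their usual closed form):

<math> F_{att}(q) = -\nabla U_{att}(q) = -k_{att} (q - q_{goal}) </math>

<math> F_{rep,j}(q) = -\nabla U_{rep,j}(q) = k_{rep,j} \left( \frac{1}{\|q - q_{j}\|} - \frac{1}{\rho_{o}} \right) \frac{1}{\|q - q_{j}\|^{2}} \, \frac{q - q_{j}}{\|q - q_{j}\|} </math> for <math> \|q - q_{j}\| \leq \rho_{o} </math>,

with the total force <math> F(q) = F_{att}(q) + \sum_{j} F_{rep,j}(q) </math>.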
The resulting force vector at the current location of the robot was used as input to the robot by calculating the angle corresponding to the vector. This angle was used as the reference angular velocity for the robot, while the y-velocity was kept mostly constant. Mostly constant here means that in the original implementation the velocity of the robot was kept low so that it had enough time to respond to all obstacles; after testing, code was added so that the robot picks up its speed when there are no obstacles in sight.
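The following sketch shows how such a force vector can be turned into a velocity reference along the lines described above. It is a simplified outline, again assuming the emc-framework interface (including an OdometryData type with fields x, y, a); the gains, the buffer ρ<sub>o</sub> and the constant forward speed are placeholders rather than the tuned values.
<syntaxhighlight lang="cpp">
#include <emc/io.h>
#include <cmath>
#include <cstddef>

// Simplified artificial-potential-field step: attract towards the goal (from
// odometry) and repel from every laser point that lies within the buffer
// rho_0. Names and values are illustrative; only the structure follows the
// description above.
void stepPotentialField(emc::IO& io, double goal_x, double goal_y,
                        double k_att, double k_rep, double rho_0)
{
    emc::OdometryData odom;
    emc::LaserData scan;
    if (!io.readOdometryData(odom) || !io.readLaserData(scan))
        return;

    // Attractive force -k_att * (q - q_goal), rotated into the robot frame.
    const double dx = goal_x - odom.x, dy = goal_y - odom.y;
    double fx = k_att * ( std::cos(odom.a) * dx + std::sin(odom.a) * dy);
    double fy = k_att * (-std::sin(odom.a) * dx + std::cos(odom.a) * dy);

    // Repulsive force: every laser point closer than rho_0 is one obstacle.
    for (std::size_t i = 0; i < scan.ranges.size(); ++i)
    {
        const double d = scan.ranges[i];
        if (d < 0.01 || d > rho_0)
            continue;
        const double angle = scan.angle_min + i * scan.angle_increment;
        const double mag = k_rep * (1.0 / d - 1.0 / rho_0) / (d * d);
        fx -= mag * std::cos(angle);        // push away from the obstacle
        fy -= mag * std::sin(angle);
    }

    // The angle of the resulting force is used as the angular velocity
    // reference, while driving at a low constant speed (sent here as the
    // x-component; the axis convention may differ in the real code).
    const double desired_angle = std::atan2(fy, fx);
    io.sendBaseReference(0.2, 0.0, desired_angle);
}
</syntaxhighlight>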
==== <big>Simulation and Implementation</big> ====
[[File:Rosey APF.gif|thumb|Hero navigating a corridor using the artificial potential fields approach.]]
Testing the code in simulation revealed that tuning k<sub>att</sub>, k<sub>rep</sub>, ρ<sub>o</sub>, and the forward and angular velocities was very important. Comparing map 1 with map 4, for example, showed that when there are not a lot of obstacles, the robot can get away with a higher attraction to the target, making the path more efficient. With more obstacles, however, the robot would get too close to the walls, and k<sub>rep</sub> had to be retuned as well as ρ<sub>o</sub> increased. In the first few iterations of the code, the robot's movements were very jittery. This was due to the calculation of the angle of the force vector. When moving up in difficulty through the different maps, the forward and angular velocities had to be reduced in order to get a smooth trajectory. Screen recordings of the simulation were made for both [https://tuenl-my.sharepoint.com/:v:/g/personal/t_h_janssen_student_tue_nl/EXkSo2-NzstDlry77uI186QBsH_PBtqqLNfHkmaPn4NbQQ?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0NvcHkifX0&e=YEBE2A map 1] and [https://tuenl-my.sharepoint.com/:v:/g/personal/t_h_janssen_student_tue_nl/Ee8QrdLO6U5MgytQWqBFXyABAd04c6NMiUA38JUTXUk9HQ?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0NvcHkifX0&e=bOBmyw map 4].
One of the first test sessions after the code was working well in simulation already showed a smooth trajectory, as can be seen in the video [https://tuenl-my.sharepoint.com/:v:/g/personal/t_h_janssen_student_tue_nl/ESuFDYMs7W5EgqFPAkyVo4EB9VZjwY4EeP4p6ch8TUC5Zg?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0NvcHkifX0&e=yMmbLF here]. The robot moves around the objects to the target approximately 6 meters away, but at a pace that is quite boring and seems inefficient from a user's point of view. Code was then added to make the robot increase its speed when it no longer saw any objects within the buffer ρ<sub>o</sub>. This helps slightly with efficiency, but only once the robot was already outside of the maze we set up. Another video including this speed increase can be found [https://tuenl-my.sharepoint.com/:v:/g/personal/t_h_janssen_student_tue_nl/EbqOZ0alUV9Br0v7jJX0Y14BzVKGPOTU-qZTtvRz_l9Vsg?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0NvcHkifX0&e=icfGWv here].
==== <big>Reflection</big> ====
'''Advantages of the approach'''
One advantage of potential fields is that the method makes sense intuitively: move away from obstacles and move towards your goal. The mathematical components required to calculate the potentials, gradients and corresponding vectors are also familiar methods, making it very doable to implement from scratch. Another advantage is that the repulsion from obstacles is calculated on real-time laser data. This makes the robot quite robust to changing environments, as new obstacles (walls around the corner or people standing in the way) are incorporated into the navigation immediately.
'''Disadvantages of the approach'''
A disadvantage that is clear from both the simulation and the test sessions is that, in order for the robot to recognize all obstacles in time, its velocity needs to be quite low. This speed-accuracy trade-off is especially evident in more complex environments with many obstacles. And although it is an advantage that the robot constantly updates the obstacles around it, the computational speed is not always sufficient to recognize moving obstacles such as people. In addition, in case of local minima in the gradient, the robot will get stuck without having reached its target. Another disadvantage, best seen in the simulation, is that the robot oscillates from left to right a lot when passing through a narrow corridor, as the resulting force vector keeps shifting from left to right.
'''What could result in failure, and how could these scenarios be prevented?'''
One situation that could lead to failure is a local minimum, as explained above. If the robot is faced with a wall directly between itself and the target and the repulsive potential is equal and exactly opposite to the attractive potential, the net result will be zero and the robot will not move. This might be prevented by adding code that ensures that if the input to the robot is zero but it has not yet reached its target, it will move in a certain direction. This correction should of course be careful of nearby obstacles. Another potential failure is not recognizing a living obstacle passing by and crashing into it. Completely eliminating this is hard, but it can be mitigated by keeping the speed low and increasing the range in which obstacles are classified; both give the algorithm more time to process the obstacle. The first mitigation could look like the sketch below.
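As an illustration of the first mitigation, a hypothetical check of this kind could look as follows; the helper name and the thresholds are made up for the example and would need to be fitted into the potential-field loop above.
<syntaxhighlight lang="cpp">
#include <emc/io.h>
#include <cmath>

// Hypothetical escape check for the net-zero-force situation described above:
// if the potential-field force is (almost) zero but the goal has not been
// reached, command a slow rotation so the robot leaves the local minimum.
bool escapeLocalMinimum(emc::IO& io, double fx, double fy, double dist_to_goal)
{
    const double force_eps = 1e-3;   // "net zero" threshold on the force
    const double goal_tol  = 0.1;    // [m] goal tolerance

    if (std::hypot(fx, fy) < force_eps && dist_to_goal > goal_tol)
    {
        io.sendBaseReference(0.0, 0.0, 0.5);  // rotate until a new force appears
        return true;
    }
    return false;
}
</syntaxhighlight>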
== '''Week 3 - Global navigation:''' ==
=== Global navigation 1 - A* algorithm: ===
''Pablo Ruiz Beltri, Riccardo Dalferro Nucci & Ruben Dragt''
The A* algorithm is a widely used pathfinding and graph-traversal algorithm. It combines the benefits of Dijkstra's algorithm and greedy best-first search to find the shortest path between nodes in a weighted graph. The code follows the structure below; a compact sketch of this structure is given after the list.
# '''Initialization:'''
#* We start with an open set (open_nodes) containing the initial node (_start_nodeID), and another set (closed_nodes) where the visited nodes will be stored, so they are only visited once.
#* Initialize a heuristic map estimating the cost from each node to the goal (h-cost), computed with calculate_distance().
#* Initialize a cost map keeping track of the cost from the start node to each node (g-cost); these values are initialized to infinity and are updated as nodes are visited and their surrounding nodes evaluated. This way, nodes not (yet) reached through any edge keep an infinite value.
#* The f-cost is the sum of the g-cost and the h-cost. The f-cost is calculated for the initial node.
# '''Processing:'''
#* While the open set is not empty, select the node with the lowest f-cost from the open nodes set.
#* If this node is the goal, the path is found.
#* Otherwise, move the node from the open set to the closed set.
# '''Expansion:'''
#* For each neighbor of the current node, calculate the tentative g-cost.
#* If this tentative g-cost is lower than the previously recorded g-cost for that neighbor, update it and change the parent node to the current node.
#* Set the f-cost for the neighbor (f = new g + h).
#* If the neighbor is not in the open set, add it.
# '''Repeat:'''
#* Continue the process until the goal is reached or the open set is empty, indicating no path exists.
# '''Store the optimal path:'''
#* To store the optimal path from the initial node to the goal (path_node_IDs), we start at the goal node and move to its parent node, continuing to store parent nodes until we reach the initial node.
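A compact, self-contained sketch of this structure is given below. The Node layout, the container choices and the helper calculate_distance() are illustrative stand-ins for the actual planner code, which operates on the node IDs mentioned above.
<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cmath>
#include <limits>
#include <list>
#include <vector>

// Compact A* sketch following the steps above. The Node layout and the helper
// calculate_distance() are illustrative stand-ins for the real planner code.
struct Node
{
    double x = 0.0, y = 0.0;
    std::vector<int> neighbours;                          // IDs of connected nodes
    double g = std::numeric_limits<double>::infinity();   // cost from the start node
    double h = 0.0;                                       // heuristic cost to the goal
    double f = std::numeric_limits<double>::infinity();   // f = g + h
    int parent = -1;
};

static double calculate_distance(const Node& a, const Node& b)
{
    return std::hypot(a.x - b.x, a.y - b.y);
}

std::vector<int> planPath(std::vector<Node>& nodes, int start_id, int goal_id)
{
    std::list<int> open_nodes = {start_id};
    std::list<int> closed_nodes;

    // Initialization: heuristic for every node, costs for the start node.
    for (Node& n : nodes)
        n.h = calculate_distance(n, nodes[goal_id]);
    nodes[start_id].g = 0.0;
    nodes[start_id].f = nodes[start_id].h;

    while (!open_nodes.empty())
    {
        // Processing: pick the open node with the lowest f-cost.
        auto it = std::min_element(open_nodes.begin(), open_nodes.end(),
            [&](int a, int b) { return nodes[a].f < nodes[b].f; });
        const int current = *it;

        if (current == goal_id)
        {
            // Store the optimal path by walking back along the parent nodes.
            std::vector<int> path_node_IDs;
            for (int id = goal_id; id != -1; id = nodes[id].parent)
                path_node_IDs.push_back(id);
            std::reverse(path_node_IDs.begin(), path_node_IDs.end());
            return path_node_IDs;
        }

        open_nodes.erase(it);
        closed_nodes.push_back(current);

        // Expansion: try to improve the g-cost of every neighbour.
        for (int nb : nodes[current].neighbours)
        {
            if (std::find(closed_nodes.begin(), closed_nodes.end(), nb) != closed_nodes.end())
                continue;                                  // already visited
            const double tentative_g =
                nodes[current].g + calculate_distance(nodes[current], nodes[nb]);
            if (tentative_g < nodes[nb].g)
            {
                nodes[nb].g = tentative_g;
                nodes[nb].f = tentative_g + nodes[nb].h;
                nodes[nb].parent = current;
                if (std::find(open_nodes.begin(), open_nodes.end(), nb) == open_nodes.end())
                    open_nodes.push_back(nb);
            }
        }
    }
    return {};   // open set empty: no path exists
}
</syntaxhighlight>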
=== Global navigation 2 - Probabilistic Road Map ===
''Tessa Janssen, Jose Puig Talavera & Thijs Beurskens''
'''Inflate the map'''
The first exercise is to inflate the occupied spaces in the map to account for the size of the robot. This was done by looping over the image by row and column and checking whether the pixel had a value below the previously defined occupancy threshold. If this was the case, a window of size radius x radius centered around the pixel was looped through, and every pixel in that window was given the value 0. Radius is a variable that can be adjusted based on which robot is used, as Coco/Bobo would need less inflation than Hero. A sketch of this step is shown below.
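The sketch assumes, purely for illustration, that the map is available as an OpenCV grayscale image (cv::Mat) and that the occupancy threshold is given as a pixel value; the real code may store the map differently.
<syntaxhighlight lang="cpp">
#include <opencv2/core.hpp>

// Sketch of the map-inflation step: every pixel that is already occupied
// (below the occupancy threshold) marks the pixels within `radius` around it
// as occupied (value 0) in a copy of the map.
cv::Mat inflateMap(const cv::Mat& map, int occupancy_threshold, int radius)
{
    cv::Mat inflated = map.clone();
    for (int row = 0; row < map.rows; ++row)
    {
        for (int col = 0; col < map.cols; ++col)
        {
            if (map.at<uchar>(row, col) >= occupancy_threshold)
                continue;                   // free pixel: nothing to inflate

            // Occupied pixel: block out the surrounding window.
            for (int dr = -radius; dr <= radius; ++dr)
            {
                for (int dc = -radius; dc <= radius; ++dc)
                {
                    const int r = row + dr, c = col + dc;
                    if (r >= 0 && r < map.rows && c >= 0 && c < map.cols)
                        inflated.at<uchar>(r, c) = 0;
                }
            }
        }
    }
    return inflated;
}
</syntaxhighlight>
Reading from the original map while writing into a copy ensures that only pixels occupied in the original map trigger inflation, instead of the inflation cascading outwards.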
'''Check vertices'''
'''Check edges'''
