Control Systems Technology Group - User contributions [en], MediaWiki 1.39.5
Source: https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19877
Embedded Motion Control 2015 Group 3/Scan, revision of 2015-06-23T12:15:28Z by S111845
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder (LRF) data. The PICO robot has a 270-degree field of view with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting the received LRF data into x and y components and summing them yields a resultant vector whose angle is the appropriate direction for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for the velocity.<br />
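The summation above can be sketched as follows (a minimal Python sketch; the function and parameter names are illustrative and not the group's actual code):<br />

```python
import math

def potential_field_heading(ranges, angle_min, angle_increment):
    """Sum all LRF beams as vectors; the angle of the resultant is the
    direction with the most free space. Its magnitude is discarded,
    since the Drive block sets the velocity itself."""
    x = y = 0.0
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_increment
        x += r * math.cos(angle)
        y += r * math.sin(angle)
    return math.atan2(y, x)  # heading for PICO to follow, in radians
```

For a symmetric corridor the positive- and negative-angle contributions cancel and the heading is straight ahead; an opening on one side pulls the resultant towards it.<br />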
<br />
In straight corridors the potential field will let PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer has been added to the laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in the desired direction.<br />
<br />
The potential field function will perceive these virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. Therefore, another safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple adjacent beams is smaller than this fixed parameter, the robot will move in the opposite direction.<br />
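This second safety layer could look as follows (a sketch; the safety margin, the required number of adjacent beams, and all names are assumptions, not the group's actual values):<br />

```python
import math

def collision_response(ranges, angle_min, angle_increment,
                       safety_margin=0.3, min_beams=5):
    """If a run of at least min_beams adjacent beams falls below the
    safety margin, return a unit vector pointing away from the centre
    of that obstacle; otherwise return None."""
    run_start = None
    for i, r in enumerate(list(ranges) + [float('inf')]):  # sentinel ends any run
        if r < safety_margin:
            if run_start is None:
                run_start = i
        elif run_start is not None:
            if i - run_start >= min_beams:
                mid = (run_start + i - 1) / 2.0           # centre beam of the run
                angle = angle_min + mid * angle_increment
                return (-math.cos(angle), -math.sin(angle))  # move opposite
            run_start = None
    return None
```

Requiring several adjacent beams (rather than a single one) makes the check robust against isolated noisy measurements.<br />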
<br />
[[File:collision.gif|200px|thumb|right|Figure 1) PICO using collision avoidance]]<br />
<br />
=== Detecting intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking beams n+10 and n-10 and checking whether they differ by more than 30 centimeters; in that case there is a corridor. With this simple but very effective method, left and right corridors can be distinguished. Next, a corridor in front must be detected. This is done by summing multiple beams in the front and dividing by the number of beams. If this average differs by more than a set value from the middle beam, there is a corridor. <br />
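The two checks can be sketched together (Python; the 30 cm threshold and the ±10 beam offset follow the text, but the window size, the 0.5 m front threshold, and which sign corresponds to left versus right are assumptions):<br />

```python
def detect_corridors(ranges, n, offset=10, side_jump=0.30,
                     front_window=5, front_jump=0.5):
    """Side openings: beams n+offset and n-offset differing by more
    than side_jump metres indicate a corridor; the sign of the
    difference tells on which side (the left/right mapping depends on
    the scan orientation and is assumed here). Front opening: the mean
    of a window of beams around the forward beam n is compared against
    beam n itself."""
    diff = ranges[n + offset] - ranges[n - offset]
    left = diff > side_jump
    right = -diff > side_jump
    window = ranges[n - front_window:n + front_window + 1]
    front = abs(sum(window) / len(window) - ranges[n]) > front_jump
    return left, right, front
```
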
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter, PICO knows it is in an open space and therefore starts wall hugging in order to find the exit. PICO stops this procedure once the corridor is equal to or smaller than 1.5 meters, which is the maximum size of a corridor.<br />
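The 80%/1 m rule reduces to a one-line check (a sketch with illustrative names; thresholds as stated in the text):<br />

```python
def in_open_space(ranges, threshold=1.0, fraction=0.8):
    """PICO considers itself in open space when at least `fraction`
    of the beams read more than `threshold` metres."""
    far = sum(1 for r in ranges if r > threshold)
    return far / len(ranges) >= fraction
```
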
<br />
<br />
[[File:open_space.gif|200px|thumb|right|Figure 1) PICO handling an open space]]<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. At first, individual virtual walls were constructed to block potential corridors, which leads PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider the crossroad shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corresponding corner is used as a reference point for computing the radius on which the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 3 maxima and 2 minima, (b) showing the slightly modified data revealing the actual corridors]][[File:Wall_crossroad.png|400px|In blue the original LRF data and in red the adjusted wall-making LRF data. On the left the data PICO sees and on the right the modified data, in which the corridors are easier to recognize]]<br />
|}<br />
<br />
===== T-junction =====<br />
<br />
Now a T-junction is examined further; Figure 2 (a) shows what PICO sees in this case. The figure shows two maxima with a minimum in between. The two maxima are used as bounds for finding the minimum. While the robot turns it is hard to keep a reference point, so this is a good method of finding the reference point, in this case a minimum, from which to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 2 maxima and 1 minimum, (b) showing the slightly modified data revealing the actual corridors]][[File:Wall_Tjunction.png|400px|In blue the original LRF data and in red the adjusted wall-making LRF data. On the left the data PICO sees and on the right the modified data, in which the corridors are easier to recognize]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different: with the above method, the minimum will not represent a corner. Locating this minimum is still useful, however. Depending on the kind of turn, beam 100+n[minimum] or 100-n[minimum] is used, and a radius is computed which represents the virtual wall.<br />
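The wall-on-a-radius idea can be sketched as overlaying a circular arc around the chosen reference point and clipping every beam that crosses it, so the potential field sees the arc as a real wall. The geometry below is a sketch of that idea, not the group's exact construction, and all names and the default radius are assumptions:<br />

```python
import math

def add_virtual_wall(ranges, angle_min, angle_increment,
                     corner_index, radius=0.5):
    """Overlay a virtual wall: a circle of the given radius centred on
    the corner reference point (the beam at corner_index). Each beam
    that would cross the circle is clipped to the intersection
    distance, so the potential field avoids the virtual wall like a
    real one."""
    rc = ranges[corner_index]
    ac = angle_min + corner_index * angle_increment
    cx, cy = rc * math.cos(ac), rc * math.sin(ac)   # corner in Cartesian
    out = list(ranges)
    for i in range(len(ranges)):
        a = angle_min + i * angle_increment
        dx, dy = math.cos(a), math.sin(a)           # unit beam direction
        # nearest solution of |t*d - c|^2 = radius^2 along the beam
        b = dx * cx + dy * cy
        disc = b * b - (cx * cx + cy * cy - radius * radius)
        if disc >= 0.0:
            t = b - math.sqrt(disc)
            if 0.0 < t < out[i]:
                out[i] = t
    return out
```

Because the clipped scan is what the potential field consumes, no change to the potential field itself is needed; the arc shape is what lets PICO cut the corner smoothly.<br />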
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors. The top images show how PICO detects doors/dead ends in front. The middle images show PICO detecting a door/dead end on the right side. The bottom images show PICO detecting upcoming doors/dead ends]]</div>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the opposite direction.<br />
<br />
[[File:collision.gif|200px|thumb|right|Figure 1) Pico using collision avoidance]]<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking n+10 and n-10 and looking if they differ more than 30 centimer, in that case there is a corridor. By using this simple but very effective method left and right corridors can be distinguished. Next, detecting if there is a corridor in front. This is done by adding up multiple beams in the front and diving them by the number of beams. If this differs more than a set value with the middle one, there is a corridor. <br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter PICO knows it is in a open space and therefore it starts wall hugging in order to find the exit. Pico will stop this procedure if the corridor is equal or smaller than 1.5 meter, which is the maximum size of a corridor.<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius where the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
===== T-junction =====<br />
<br />
Now a T-junction is further examined, Figure 2 (a) shows what pico sees in this case. The figure shows two maxima with in between a minima. These two maxima are used as bounds for finding the minima. When the robot turn it is hard to keep a reference point and therefore this is a good method of finding the reference point, which is a minima in this case, to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different, in this case using the above method the minima will not represent a corner. However locating this minima is usefull, dependent on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed which will represent the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19874Embedded Motion Control 2015 Group 3/Scan2015-06-23T12:13:24Z<p>S111845: /* Crossroad */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the opposite direction.<br />
<br />
[[File:collision.gif|200px|thumb|right|Figure 1) Pico using collision avoidance]]<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking n+10 and n-10 and looking if they differ more than 30 centimer, in that case there is a corridor. By using this simple but very effective method left and right corridors can be distinguished. Next, detecting if there is a corridor in front. This is done by adding up multiple beams in the front and diving them by the number of beams. If this differs more than a set value with the middle one, there is a corridor. <br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter PICO knows it is in a open space and therefore it starts wall hugging in order to find the exit. Pico will stop this procedure if the corridor is equal or smaller than 1.5 meter, which is the maximum size of a corridor.<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius where the virtual walls are set.<br />
<br />
===== T-junction =====<br />
<br />
Now a T-junction is further examined, Figure 2 (a) shows what pico sees in this case. The figure shows two maxima with in between a minima. These two maxima are used as bounds for finding the minima. When the robot turn it is hard to keep a reference point and therefore this is a good method of finding the reference point, which is a minima in this case, to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different, in this case using the above method the minima will not represent a corner. However locating this minima is usefull, dependent on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed which will represent the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=File:Wall_crossroad.png&diff=19873File:Wall crossroad.png2015-06-23T12:11:04Z<p>S111845: uploaded a new version of "File:Wall crossroad.png"</p>
<hr />
<div>In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=File:Wall_Tjunction.png&diff=19872File:Wall Tjunction.png2015-06-23T12:10:41Z<p>S111845: uploaded a new version of "File:Wall Tjunction.png"</p>
<hr />
<div>In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19871Embedded Motion Control 2015 Group 3/Scan2015-06-23T12:08:26Z<p>S111845: /* Crossroad */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the opposite direction.<br />
<br />
[[File:collision.gif|200px|thumb|right|Figure 1) Pico using collision avoidance]]<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking n+10 and n-10 and looking if they differ more than 30 centimer, in that case there is a corridor. By using this simple but very effective method left and right corridors can be distinguished. Next, detecting if there is a corridor in front. This is done by adding up multiple beams in the front and diving them by the number of beams. If this differs more than a set value with the middle one, there is a corridor. <br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter PICO knows it is in a open space and therefore it starts wall hugging in order to find the exit. Pico will stop this procedure if the corridor is equal or smaller than 1.5 meter, which is the maximum size of a corridor.<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius where the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
===== T-junction =====<br />
<br />
Now a T-junction is further examined, Figure 2 (a) shows what pico sees in this case. The figure shows two maxima with in between a minima. These two maxima are used as bounds for finding the minima. When the robot turn it is hard to keep a reference point and therefore this is a good method of finding the reference point, which is a minima in this case, to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different, in this case using the above method the minima will not represent a corner. However locating this minima is usefull, dependent on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed which will represent the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19870Embedded Motion Control 2015 Group 3/Scan2015-06-23T11:55:42Z<p>S111845: /* Collision avoidance */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the opposite direction.<br />
<br />
[[File:collision.gif|200px|thumb|right|Figure 1) Pico using collision avoidance]]<br />
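The second safety layer can be sketched like this. The function name, the safety margin, and the number of beams required are assumed values; the text only states that multiple adjacent beams must be inside a fixed margin.<br />

```python
def collision_risk(ranges, safety_margin=0.3, min_beams=5):
    # Flag a risk only when several consecutive beams are inside the
    # margin, so one noisy beam does not trigger an evasive move.
    # safety_margin (meters) and min_beams are assumptions.
    run = 0
    for r in ranges:
        run = run + 1 if r < safety_margin else 0
        if run >= min_beams:
            return True  # move in the opposite direction
    return False
```

Requiring a run of consecutive short beams is one simple way to make the check robust against single-beam noise.<br />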
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze, it is also an important part of mapping it, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking beams n+10 and n-10 and checking whether they differ by more than 30 centimeters; if so, there is a corridor on that side. With this simple but very effective method, left and right corridors can be distinguished. Next, a corridor in front is detected by adding up multiple beams at the front and dividing the sum by the number of beams. If this average differs by more than a set value from the middle beam, there is a corridor ahead. <br />
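Both checks can be sketched as follows. The beam offset of 10 and the 30 cm jump follow the text; the function names, the window size for the front check, and its threshold are assumptions.<br />

```python
def side_corridor(ranges, n, offset=10, jump=0.30):
    # A corridor opens sideways at beam n when beams n-10 and n+10
    # differ by more than 30 cm.
    return abs(ranges[n + offset] - ranges[n - offset]) > jump

def front_corridor(ranges, mid, half_width=10, threshold=0.5):
    # Average a window of front beams and compare it with the middle
    # beam; a large difference indicates a corridor ahead.
    # half_width and threshold are assumed values.
    window = ranges[mid - half_width:mid + half_width + 1]
    return abs(sum(window) / len(window) - ranges[mid]) > threshold
```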
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter, PICO knows it is in an open space and starts wall hugging in order to find the exit. PICO stops this procedure once the corridor is at most 1.5 meters wide, which is the maximum size of a corridor.<br />
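The open-space test reduces to counting far beams. A sketch, using the 1 m distance and 80% fraction from the text (the function name is hypothetical):<br />

```python
def in_open_space(ranges, far_dist=1.0, fraction=0.8):
    # Open space: at least 80% of the beams see more than 1 m of room.
    # PICO then wall-hugs until it finds a corridor of at most 1.5 m.
    far = sum(1 for r in ranges if r > far_dist)
    return far >= fraction * len(ranges)
```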
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. At first, individual virtual walls were constructed to block potential corridors, leading PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
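Placing a virtual wall can be sketched as overwriting a sector of the LRF data with short ranges before it reaches the potential field. The sector indices and the wall distance below are assumptions for illustration.<br />

```python
def add_virtual_wall(ranges, start, stop, wall_dist=0.3):
    # Clamp the beams of a blocked corridor to a short range; the
    # potential field then treats that sector as a nearby wall and
    # steers PICO away from it. wall_dist is an assumed value.
    out = list(ranges)
    for i in range(start, stop):
        out[i] = min(out[i], wall_dist)
    return out
```

Clamping with min() ensures a beam is only ever shortened, never lengthened, so real walls inside the sector are still respected.<br />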
<br />
<br />
===== Crossroad ===== <br />
Consider the crossroad shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can take. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). Figure 1 also shows two minima, which represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corresponding corner is used as a reference point for computing the radius at which the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
===== T-junction =====<br />
<br />
Now a T-junction is examined further; Figure 2 (a) shows what PICO sees in this case. The figure shows two maxima with a minimum in between. The two maxima are used as bounds for finding the minimum. While the robot turns it is hard to keep a reference point, so this is a good method of finding the reference point, in this case a minimum, from which to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
For a T-junction the situation is slightly different: with the above method the minimum does not represent a corner. Locating this minimum is still useful, however; depending on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed that represents the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19869Embedded Motion Control 2015 Group 3/Scan2015-06-23T11:55:31Z<p>S111845: /* Collision avoidance */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the opposite direction.<br />
<br />
[[File:collision.gif|200px|thumb|center|Figure 1) Pico using collision avoidance]]<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking n+10 and n-10 and looking if they differ more than 30 centimer, in that case there is a corridor. By using this simple but very effective method left and right corridors can be distinguished. Next, detecting if there is a corridor in front. This is done by adding up multiple beams in the front and diving them by the number of beams. If this differs more than a set value with the middle one, there is a corridor. <br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter PICO knows it is in a open space and therefore it starts wall hugging in order to find the exit. Pico will stop this procedure if the corridor is equal or smaller than 1.5 meter, which is the maximum size of a corridor.<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius where the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
===== T-junction =====<br />
<br />
Now a T-junction is further examined, Figure 2 (a) shows what pico sees in this case. The figure shows two maxima with in between a minima. These two maxima are used as bounds for finding the minima. When the robot turn it is hard to keep a reference point and therefore this is a good method of finding the reference point, which is a minima in this case, to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different, in this case using the above method the minima will not represent a corner. However locating this minima is usefull, dependent on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed which will represent the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19868Embedded Motion Control 2015 Group 3/Scan2015-06-23T11:55:20Z<p>S111845: /* Collision avoidance */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the opposite direction.<br />
<br />
[[File:collision.gif|400px|thumb|center|Figure 1) Pico using collision avoidance]]<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking n+10 and n-10 and looking if they differ more than 30 centimer, in that case there is a corridor. By using this simple but very effective method left and right corridors can be distinguished. Next, detecting if there is a corridor in front. This is done by adding up multiple beams in the front and diving them by the number of beams. If this differs more than a set value with the middle one, there is a corridor. <br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter PICO knows it is in a open space and therefore it starts wall hugging in order to find the exit. Pico will stop this procedure if the corridor is equal or smaller than 1.5 meter, which is the maximum size of a corridor.<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius where the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
===== T-junction =====<br />
<br />
Now a T-junction is further examined, Figure 2 (a) shows what pico sees in this case. The figure shows two maxima with in between a minima. These two maxima are used as bounds for finding the minima. When the robot turn it is hard to keep a reference point and therefore this is a good method of finding the reference point, which is a minima in this case, to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different, in this case using the above method the minima will not represent a corner. However locating this minima is usefull, dependent on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed which will represent the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19867Embedded Motion Control 2015 Group 3/Scan2015-06-23T11:54:36Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
[[File:collision.gif|400px|thumb|center|Figure 1) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking n+10 and n-10 and looking if they differ more than 30 centimer, in that case there is a corridor. By using this simple but very effective method left and right corridors can be distinguished. Next, detecting if there is a corridor in front. This is done by adding up multiple beams in the front and diving them by the number of beams. If this differs more than a set value with the middle one, there is a corridor. <br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter PICO knows it is in a open space and therefore it starts wall hugging in order to find the exit. Pico will stop this procedure if the corridor is equal or smaller than 1.5 meter, which is the maximum size of a corridor.<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius where the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
===== T-junction =====<br />
<br />
Now a T-junction is further examined, Figure 2 (a) shows what pico sees in this case. The figure shows two maxima with in between a minima. These two maxima are used as bounds for finding the minima. When the robot turn it is hard to keep a reference point and therefore this is a good method of finding the reference point, which is a minima in this case, to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different, in this case using the above method the minima will not represent a corner. However locating this minima is usefull, dependent on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed which will represent the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19865Embedded Motion Control 2015 Group 3/Scan2015-06-23T11:53:07Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking n+10 and n-10 and looking if they differ more than 30 centimer, in that case there is a corridor. By using this simple but very effective method left and right corridors can be distinguished. Next, detecting if there is a corridor in front. This is done by adding up multiple beams in the front and diving them by the number of beams. If this differs more than a set value with the middle one, there is a corridor. <br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter PICO knows it is in a open space and therefore it starts wall hugging in order to find the exit. Pico will stop this procedure if the corridor is equal or smaller than 1.5 meter, which is the maximum size of a corridor.<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius where the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
===== T-junction =====<br />
<br />
Now a T-junction is examined; Figure 2 (a) shows what PICO sees in this case. The figure shows two maxima with a minimum in between. The two maxima are used as bounds for finding the minimum. When the robot turns it is hard to keep a reference point, so this is a good method of finding the reference point, in this case a minimum, from which to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different: with the above method the minimum does not represent a corner. Locating this minimum is nevertheless useful; depending on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed which represents the virtual wall.<br />
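Locating the maxima and minima that serve as reference points can be sketched as a simple local-extrema scan over the range data (illustrative; real LRF data would first need smoothing, and the window size here is a guess, not the value used on the robot):<br />

```python
def local_extrema(ranges, window=5):
    """Return (maxima, minima) as lists of beam indices. A beam is an
    extremum when it is strictly the largest/smallest value within
    +/- `window` beams around it."""
    maxima, minima = [], []
    for i in range(window, len(ranges) - window):
        nb = ranges[i - window:i + window + 1]
        if ranges[i] == max(nb) and ranges[i] > nb[0] and ranges[i] > nb[-1]:
            maxima.append(i)   # candidate corridor direction
        if ranges[i] == min(nb) and ranges[i] < nb[0] and ranges[i] < nb[-1]:
            minima.append(i)   # candidate corner / reference point
    return maxima, minima
```

For the crossroad of Figure 1 this scan would yield three maxima (the corridors) and two minima (the far corners); for the T-junction, two maxima with one minimum in between.<br />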
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19851Embedded Motion Control 2015 Group 32015-06-23T10:37:26Z<p>S111845: /* Scan block */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make separate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply it in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). To make this operational, and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls; therefore, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
[[File:behaviour_diagram.png|thumb|left|Blockdiagram for connection between the contexts]] The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level proceedings the robot should be able to perform. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<br />
<br />
<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|thumb|left|Overall software structure]]To solve the problem, it is divided into different blocks, each with its own function. We have chosen to make five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. A short clarification of the figure is given below; more detailed information on each block is discussed in the next sections. <br />
<br />
<br />
<br />
<br />
<br />
Let's start with the Scan block:<br />
* Scan receives information about the environment. To do this it uses his laser range finder data.<br />
* Based on this data Scan consults its potential field algorithm to make a vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at what interesting point the robot is. For instance, this can be a junction or a dead end.<br />
* Then it should know if the robot has been there before. Based on that, Decision can send a new action to Scan/Drive. <br />
* Now the new vector is based on the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.<br />
<br />
=== Calibration ===<br />
In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors in some way. The robot was programmed to drive back and forth in front of a wall. At every time instance, it would also collect odometry data and laser data. The laser data point that was straight in front of the robot was compared to the odometry data, i.e. the driven distance is compared to the measured distance to the wall in front of the robot. The following figure is the result:<br />
<br />
<br />
[[File:Originaldata.png|thumb|left|Difference between odometry and LRF]]<br />
<br />
The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped, so that the laser data should match the odometry exactly if the sensors provided perfect data. Two things are notable from this figure:<br />
*The laser data and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
The noisy signal that was returned by the laser is presented in the next figure. Here, a part of the laser data is picked from a robot that was not moving.<br />
<br />
<br />
[[File:StaticLRF.png|thumb|left|Static LRF]]<br />
<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm.<br />
* The laser produces a noisy signal: do not trust a single measurement, but take the average over time instead.<br />
* The odometry produces no notable noise at all, but it drifts significantly as the driven distance increases. Usage is recommended only for smaller distances (<1 m).<br />
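The "average over time" advice can be sketched as a running mean per beam (the window size is a design choice for illustration, not the value used on the robot):<br />

```python
from collections import deque

class BeamFilter:
    """Average the last `n` samples of a single LRF beam to suppress
    the roughly 5.5 mm standard-deviation noise measured above."""

    def __init__(self, n=10):
        self.samples = deque(maxlen=n)  # oldest sample drops out automatically

    def update(self, distance):
        self.samples.append(distance)
        return sum(self.samples) / len(self.samples)
```

Averaging n independent samples reduces the noise standard deviation by a factor of sqrt(n); with n = 10, the 5.5 mm noise drops to roughly 1.7 mm, at the cost of a small lag while the robot moves.<br />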
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
Brief instruction about one block can be found here. In addition, there are also more detailed problem-solving processes and ideas which can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive block] is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field; how the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. A potential field is an easy way to drive through corridors and make turns. It is important to note that information from the Decision maker can influence the tasks Drive has to do. <br />
<br />
Two other methods were also considered: the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning path planning]. We spent a lot of time on the path-planning method in particular; however, after testing, the potential field proved to be the most robust and convenient method.<br />
<br />
The composition pattern of the drive block:<br />
[[File:Drive.jpg|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan The block Scan] processes the laser data of the Laser Range Finders. This data is used to detect corridors, doors, and different kind of junctions. The data that is retrieved by 'scan' is used by all three other blocks.<br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
[[File:Scan_cp_new.jpg|400px|thumb|center|Composition pattern Scan]]<br />
<br />
PICO always moves to the place with the most space using its potential field. However, at junctions and intersections the potential field alone is incapable of leading PICO into the desired direction. Virtual walls are constructed to shield undesired pathways, so that PICO moves in the direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. The Scan block is also capable of detecting doors, which is a necessary part of solving the maze. More detailed information about these properties:<br />
<br />
* Potential field<br />
* Detecting junctions/intersections<br />
* Virtual walls<br />
* Collision avoidance<br />
* Detecting doors<br />
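The potential field and the virtual walls described above can be sketched as follows (illustrative only; the angle convention, beam count and virtual-wall distance are assumptions, not the actual implementation):<br />

```python
import math

def potential_field_direction(ranges, fov=math.radians(270)):
    """Sum every beam as a vector (r*cos(a), r*sin(a)); the resultant
    points towards the most open direction. Only the direction of the
    resultant is used, not its magnitude (Drive sets the speed itself)."""
    n = len(ranges)
    x = y = 0.0
    for i, r in enumerate(ranges):
        angle = -fov / 2 + i * fov / (n - 1)  # beam angle, 0 = straight ahead
        x += r * math.cos(angle)
        y += r * math.sin(angle)
    return math.atan2(y, x)  # radians; positive = to the left here

def add_virtual_wall(ranges, start, stop, dist=0.4):
    """Overwrite beams [start, stop) with a short distance so the
    potential field treats that sector as blocked by a wall."""
    return [min(r, dist) if start <= i < stop else r
            for i, r in enumerate(ranges)]
```

Because the virtual wall simply shortens the affected beams, the unmodified potential field "avoids" it exactly as it would a real wall, which is the whole trick.<br />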
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
[[File:Composition_Pattern_Decision.png|400px|thumb|center|Composition pattern Decision]]<br />
<br />
The maze-solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks; if two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached. (ref)<br />
<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block stores the corridors and junctions of the maze, so that the decision block can consider certain possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As is said in the previous paragraph, the Tremaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze consists of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another; an edge may also lead back to the same node, in which case it is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is a choice of direction that will lead to solving the maze.<br />
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** Robot tries to find where he is located in global coordinates. Now it can decide if it is on a new node or on an old node.<br />
** The robot figures out from which node it came from. Now it can define what edge it has been traversing. It marks the edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot will also have to realize if the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''visited 2 times''. If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead. Otherwise left, otherwise right.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as a 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and start figuring out the next step.<br />
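The "choosing a new direction" rules above can be condensed into a small sketch (the direction labels and the visit-count bookkeeping are our own illustration of the rules, not the actual implementation):<br />

```python
import random

def tremaux_choice(edges):
    """Choose the next edge at a node, Tremaux-style.

    `edges` maps a direction label ('straight', 'left', 'right',
    'back') to the number of times that edge has been traversed.
    Take the least-visited group; within it, prefer straight, then
    left, then right, as described in the rules above."""
    fewest = min(edges.values())
    candidates = [d for d, visits in edges.items() if visits == fewest]
    for preferred in ('straight', 'left', 'right'):
        if preferred in candidates:
            return preferred
    return random.choice(candidates)  # only 'back' (or a tie) remains
```

Because twice-visited edges always lose to edges visited fewer times, the sketch never re-enters a fully explored branch, which is the property that makes Trémaux's algorithm terminate.<br />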
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Link to mapping page]<br />
<br />
= Localisation =<br />
The purpose of the localisation is that the robot can prevent itself from driving in a loop indefinitely, by knowing where it is at any given moment in time. Knowing where it is, it can decide what to do next based on this information. As a result, the robot will be able to exit the maze within finite time, or it will report that there is no exit after it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation page], where the important factors are separated and discussed.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge, in which the robot has to drive through a corridor and then take the first exit (whether left or right). This job can be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implies some extra work to think about properly structured code, because only then can the same script be used for the final challenge. After the corridor competition we could question our choice, because we failed the corridor challenge while other groups succeeded. However, most of those groups had selected approach 1, whereas we already had a decent base for the a-maze-ing challenge. This proved its worth later on.<br />
<br />
For the a-maze-ing challenge we decided on using two versions of our software package. In the first run (see the section Videos further down the page), we implemented Trémaux's algorithm together with a localiser that would jointly map the maze and try to solve it. Our second run was conducted with Trémaux's algorithm and the localisation algorithm turned off: each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
== Run 1 ==<br />
The first run is taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way cross-section and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused because of the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because it (of course) didn't open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight but also to evade the walls to the left and to the right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits for the maximum of 5 seconds that it can take to open a door. It then recognizes that this, too, is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Trémaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the other nodes to be placed: the robot decides to follow the same route as the first time, fails to drive into the corridor with the door, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction the robot encountered made PICO go into its collision-avoiding mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localization went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
== Run 2==<br />
For the second run, we ran a version of our software without Trémaux's algorithm and with the global localiser absent. These features were developed later in the project and were not 100% finished. In this run, a random decision was passed to the decision maker every time it asked for a new direction to head in.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is taken exactly the same way as before. This is because the robot is driven by separate blocks of software: the blocks that are active while following a corridor were exactly the same for both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. A random decision is then made to go left, followed by a right turn into the corridor with the door. PICO recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after the bell rang, the robot is programmed to wait for five seconds until it is allowed to move again. During these five seconds, it uses the LRF to check whether the door has moved out of its way. Once the passage was clear, the robot started exploring the new area and drove into the open space. Note that, between 0:30 and 0:36, the robot makes a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of that space. Between 0:36 and 0:46 it drives in 'open space mode', which means that the robot drives to the nearest wall and starts driving alongside it; it should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47 the normal 'corridor mode' starts working again: the potential field method directs the robot towards the middle of the corridor, which explains the sharp turn it makes there. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the random decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. 
At 1:10, the well-earned applause for PICO started as it finished the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find short information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive] <-- empty<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19850Embedded Motion Control 2015 Group 32015-06-23T10:35:59Z<p>S111845: /* Scan block */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make seperate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software designs and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight in the possibilities and limitations in relation with the embedded environment (actuators, sensors, processors, RTOS). To make this operational and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge. In which the robot has to find its way out in a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* So, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
[[File:behaviour_diagram.png|thumb|left|Blockdiagram for connection between the contexts]] The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The task are the most high level proceedings the robot should be able to do. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<br />
<br />
<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|thumb|left|Static LRF]]To solve the problem, it is divided into different blocks with their own functions. We have chosen to make these five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure below shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. Now, a short clarification of the figure will be given. More detailed information of each block will be discussed in the next sections. <br />
<br />
<br />
<br />
<br />
<br />
Let's start with the Scan block:<br />
* Scan receives information about the environment, using the laser range finder data.<br />
* Based on this data, Scan consults its potential field algorithm to produce a direction vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together, the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at which interesting point the robot is. For instance, this can be a junction or a dead end.<br />
* It should also know whether the robot has been there before. Based on that, Decision can send a new action to Scan/Drive. <br />
* The new vector is then based on both the environment data and the information from Decision. In this way, the robot finds a strategic way to drive through the maze.<br />
<br />
=== Calibration ===<br />
In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors. The robot was programmed to drive back and forth in front of a wall, collecting odometry data and laser data at every time instant. The laser data point straight in front of the robot was compared to the odometry data, i.e. the driven distance is compared to the measured distance to the wall in front of the robot. The following figure shows the result:<br />
<br />
<br />
[[File:Originaldata.png|thumb|left|Difference between odometry and LRF]]<br />
<br />
The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped so that the laser data should match the odometry exactly if the sensors provided perfect data. Two things are notable from this figure:<br />
*The laser data and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
The noisy signal that was returned by the laser is presented in the next figure. Here, a part of the laser data is picked from a robot that was not moving.<br />
<br />
<br />
[[File:StaticLRF.png|thumb|left|Static LRF]]<br />
<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm.<br />
* The laser produces a noisy signal: do not trust a single measurement, but take the average over time instead.<br />
* The odometry produces no notable noise at all, but it drifts significantly as the driven distance increases. Usage is recommended only for short distances (<1 m).<br />
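The averaging advice above can be sketched as a small sliding-window filter per beam (a minimal illustration; the class name and window size are our own choices, not taken from the actual PICO code):<br />

```cpp
#include <cstddef>
#include <deque>
#include <numeric>

// Sliding-window average for a single LRF beam, following the calibration
// conclusion that one measurement should not be trusted on its own.
class BeamFilter {
public:
    explicit BeamFilter(std::size_t window) : window_(window) {}

    // Add a new range measurement [m] and return the current average.
    double update(double range) {
        samples_.push_back(range);
        if (samples_.size() > window_) samples_.pop_front();
        double sum = std::accumulate(samples_.begin(), samples_.end(), 0.0);
        return sum / static_cast<double>(samples_.size());
    }

private:
    std::size_t window_;
    std::deque<double> samples_;
};
```

Averaging N samples reduces the standard deviation of the noise by roughly a factor sqrt(N), at the cost of a slower response to real changes.<br />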
<br />
= Software implementation =<br />
In this section, the implementation of the software is discussed based on the block division we made.<br />
<br />
A brief description of each block can be found here. More detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive block] is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. The potential field is an easy way to drive through corridors and make turns. It is important to note that information from the Decision maker can influence the tasks Drive has to perform. <br />
<br />
Two other methods were also considered: the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. We spent a lot of time on the path planning method in particular. However, after testing, the potential field proved to be the most robust and convenient method.<br />
<br />
The composition pattern of the drive block:<br />
[[File:Drive.jpg|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan The block Scan] processes the data of the laser range finder. This data is used to detect corridors, doors, and different kinds of junctions. The data retrieved by 'scan' is used by all three other blocks.<br />
<br />
# Scan directly gives information to 'drive', which uses it to avoid collisions.<br />
# Scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
[[File:Scan_cp_new.jpg|400px|thumb|center|Composition pattern Scan]]<br />
<br />
PICO always moves to the place with the most space using its potential field. However, at junctions and intersections the potential field alone is incapable of leading PICO into the desired direction. Virtual walls are constructed to shield potential pathways, so that PICO moves in the direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. The scan block is also capable of detecting doors, which is a necessary part of solving the maze. More detailed information about these properties of Scan is given on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan page].<br />
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives data about the world from 'Scan', then decides what to do (possibly consulting 'Mapping'), and finally sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
[[File:Composition_Pattern_Decision.png|400px|thumb|center|Composition pattern Decision]]<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks. If two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached. (ref)<br />
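The fewest-marks rule can be sketched as follows (a hypothetical helper, not the group's actual code; `marks` holds the number of lines already drawn on each edge leaving the current node):<br />

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Core Trémaux rule: pick an edge with the fewest marks, breaking ties
// randomly, as described in the text. Returns the index of the chosen edge.
int chooseEdge(const std::vector<int>& marks) {
    int fewest = *std::min_element(marks.begin(), marks.end());
    std::vector<int> candidates;
    for (std::size_t i = 0; i < marks.size(); ++i)
        if (marks[i] == fewest) candidates.push_back(static_cast<int>(i));
    // Random tie-break between equally marked directions.
    return candidates[std::rand() % candidates.size()];
}
```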
<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block stores the corridors and junctions of the maze, so that the decision block can weigh the possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As is said in the previous paragraph, the Tremaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze consists of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the set of possible routes the robot can take (left, straight ahead, right, turn around) and the output is the choice of direction that will lead to solving the maze.<br />
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is on a new node or an old node.<br />
** The robot figures out which node it came from. It can then define which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.<br />
** The robot also has to realize whether the current node is connected to a dead end. In that case, it requests the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened. In that case, go straight ahead and mark the edge that led up to the door as ''visited 2 times''. If not, choose the edge you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if it is unvisited. Otherwise, follow the unvisited edge on your left; otherwise, the unvisited edge on your right.<br />
** Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, follow the one straight ahead; otherwise left, otherwise right.<br />
** Are there any edges visited twice? Do not go there. According to the Trémaux algorithm, there must be an edge left to explore (visited once or not at all), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. Each edge has a vector pointing from the node in the direction of the edge in global coordinates. The robot must receive a command that guides it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
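The translation step above, from a global edge direction to a local turn command, can be sketched as follows (angles in radians; the function and parameter names are illustrative, not taken from the group's code):<br />

```cpp
// Translate the chosen edge's direction (global coordinates, as stored in
// the map) into a heading relative to the robot's current orientation.
double globalToLocalHeading(double edgeAngleGlobal, double robotHeadingGlobal) {
    const double kPi = 3.14159265358979323846;
    double rel = edgeAngleGlobal - robotHeadingGlobal;
    // Wrap into (-pi, pi] so a positive result means "turn left" and a
    // negative result means "turn right".
    while (rel > kPi)   rel -= 2.0 * kPi;
    while (rel <= -kPi) rel += 2.0 * kPi;
    return rel;
}
```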
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Link to mapping page]<br />
<br />
= Localisation =<br />
The purpose of localisation is to prevent the robot from driving in a loop indefinitely, by knowing where it is at any given moment. Knowing its position, it can decide what to do next. As a result, the robot will be able to exit the maze within finite time, or report that there is no exit after it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation page], by discussing the important factors separately.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge. During this challenge, the robot has to drive through a corridor and then take the first exit (whether left or right). This job can be tackled with two different approaches:<br />
# Make a script solely for the corridor challenge.<br />
# Make a script for the corridor challenge, but with clear references to the final maze challenge.<br />
We chose the second approach. This implied some extra work to think about a properly structured code, because only then could the same script be used for the final challenge. After the corridor competition, we could question our choice, because we failed the corridor challenge while other groups succeeded. However, most of those groups had selected approach 1, whereas we already had a decent base for the a-maze-ing challenge. This proved its worth later on.<br />
<br />
For the a-maze-ing challenge we decided to use two versions of our software package. In the first run (see the section Videos further down the page), we implemented Trémaux's algorithm together with a localiser that would map the maze and try to solve it. Our second run was conducted with Trémaux's algorithm and the localisation algorithm turned off. Each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
== Run 1 ==<br />
The first run is taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way intersection and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused by the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because the 'door' (of course) did not open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight while also evading the walls to the left and right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits the maximum of 5 seconds that it can take to open the door. It then recognizes that this too is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Trémaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the other nodes. The robot decides to follow the same route as the first time, fails to drive to the corridor with the door in it, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction that the robot encountered sent PICO into its collision-avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
== Run 2==<br />
For the second run, we ran a version of our software without Trémaux's algorithm and without the global localiser. These features were developed later in the project and were not 100% finished. In this run, a random choice was passed to the decision maker every time it asked for a new direction to head to.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again, the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same as in the first run. This is because the robot is driven by separate blocks of software: the blocks that are active while following a corridor were exactly the same for both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. A random decision is made to go left, followed by a right turn into the corridor with the door. PICO recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after ringing the bell, the robot is programmed to wait for five seconds before it is allowed to move again. During these five seconds, it uses the LRF to check whether the door has moved out of its way. Once the passage was clear, the robot started exploring the new area and drives into the open space. Note that, between 0:30 and 0:36, the robot makes a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of that space. Between 0:36 and 0:46 it drives in 'open space mode'. This means that the robot drives to the nearest wall and starts driving alongside it; it should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47, the normal 'corridor mode' starts working again: the potential field method directs the robot towards the middle of the corridor, which explains the sharp turn it makes at 0:47. After hearing the presenter ask 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. 
At 1:10, the well-earned applause for PICO starts, as it finished the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find short information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive] <-- empty<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation]</div>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make separate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software designs and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). To make this operational and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Scan block ===<br />
<br />
===== Potential field =====<br />
The received laser data from the LRF is split up into x and y components; summing these gives a resultant vector containing the appropriate angle for PICO to follow. PICO therefore always moves towards the place with the most space. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for setting the velocity. <br />
<br />
In corridors, the potential field makes PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker. Detailed information about the decision maker can be found in the Decision block section.<br />
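As a minimal sketch of this computation (variable names are ours; the real code reads the ranges and beam angles from the LRF driver):<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Potential-field direction: every beam contributes a pull along its own
// angle, so summing the x and y components of all beams yields the angle
// with the most free space. The beam angles span the 270-degree field of
// view, from angleMin in steps of angleIncrement (radians).
double potentialFieldAngle(const std::vector<double>& ranges,
                           double angleMin, double angleIncrement) {
    double x = 0.0, y = 0.0;
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double a = angleMin + i * angleIncrement;
        x += ranges[i] * std::cos(a);   // longer beams pull harder
        y += ranges[i] * std::sin(a);
    }
    // Only the direction matters; Drive sets the speed itself.
    return std::atan2(y, x);
}
```

In a symmetric corridor the lateral components cancel, so the resultant points straight ahead, which is exactly the centering behaviour described above.<br />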
<br />
===== Constructing virtual walls =====<br />
At junctions and intersections, the current potential field is unable to lead PICO in the desired direction. Therefore, an extra layer is added to the scan data, which enables editing of the data that PICO gets to see. The main advantage of introducing this second layer is that the actual measured data remains available to all the processes used by the other blocks. By modifying the data in the extra layer, virtual walls are constructed; these steer PICO into the desired direction via the potential field. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
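A sketch of this second layer (function name, beam indices and the 0.3 m wall distance are illustrative choices, not the group's actual parameters):<br />

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Virtual-wall layer: the raw scan is kept untouched and a copy is edited,
// so every other block still sees the real measurements. Beams in the
// blocked sector are clamped to a short range, which the potential field
// then treats as a nearby wall.
std::vector<double> addVirtualWall(const std::vector<double>& rawRanges,
                                   std::size_t firstBeam, std::size_t lastBeam,
                                   double wallDistance = 0.3) {
    std::vector<double> edited = rawRanges;             // second layer; raw data preserved
    for (std::size_t i = firstBeam; i <= lastBeam && i < edited.size(); ++i)
        edited[i] = std::min(edited[i], wallDistance);  // pretend a wall is there
    return edited;
}
```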
<br />
===== Collision avoidance =====<br />
<br />
To create an extra layer of safety, collision avoidance has been added on top of the potential field. In general the potential field will avoid collisions; however, constructing virtual walls can fail and let PICO crash into a wall. This collision avoidance is kept simple: when the distance of multiple adjacent LRF beams is below a certain threshold value, PICO moves in the opposite direction. The usage of multiple beams makes this method more robust. The current parameter for activating collision avoidance is set at 30 centimeters measured from the scanner; note that this value is based on the dimensions of PICO.<br />
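The check described above can be sketched as follows (the 0.30 m threshold comes from the text; the window of 5 adjacent beams is our illustrative choice):<br />

```cpp
#include <vector>

// Collision check: trigger only when several consecutive beams are all
// below the threshold, so a single noisy beam cannot cause an emergency
// reaction on its own.
bool collisionImminent(const std::vector<double>& ranges,
                       double threshold = 0.30, int window = 5) {
    int run = 0;
    for (double r : ranges) {
        run = (r < threshold) ? run + 1 : 0;  // count adjacent close beams
        if (run >= window) return true;       // enough of them: back off
    }
    return false;
}
```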
<br />
===== Detecting doors =====<br />
PICO has to be able to detect and open a door to get to the end of the maze. First, a possible door or dead end has to be detected. This is done using the LRF data, in principle by looking for a specific profile that qualifies as a door or a dead end. By sending the door request and waiting for a few seconds, PICO detects whether it is a door or not. If it is a door, PICO must go through; if not, PICO has to turn around and keep searching.<br />
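A rough sketch of such a profile test (the sector bounds and the 1.3 m limit are illustrative assumptions, not the group's actual parameters):<br />

```cpp
#include <cstddef>
#include <vector>

// Dead-end/door candidate test: if every beam in the frontal sector returns
// a short range, PICO is facing a closed profile that may be a door.
bool possibleDoorAhead(const std::vector<double>& ranges,
                       std::size_t first, std::size_t last,
                       double maxRange = 1.3) {
    for (std::size_t i = first; i <= last && i < ranges.size(); ++i)
        if (ranges[i] > maxRange) return false;  // an opening: not a dead end
    return true;                                 // closed on all sides: ring the bell
}
```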
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
[[File:Composition_Pattern_Decision.png|400px|thumb|center|Composition pattern Decision]]<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm amounts to drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks; if two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached. (ref)<br />
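The mark-counting rule can be condensed into a few lines (an illustrative Python sketch; direction names and the dictionary interface are invented for the example):<br />

```python
import random

def tremaux_choice(marks):
    """Tremaux's rule at a junction: 'marks' maps each available
    direction to how often its edge has been walked. Pick (randomly,
    on a tie) among the least-walked directions, but never walk an
    edge a third time."""
    fewest = min(marks.values())
    if fewest >= 2:
        return None  # every edge walked twice: nothing left to explore here
    candidates = [d for d, m in marks.items() if m == fewest]
    return random.choice(candidates)

choice = tremaux_choice({'left': 1, 'straight': 0, 'right': 1})
```
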
<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block stores the corridors and junctions of the maze, so that the decision block can weigh certain possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As is said in the previous paragraph, the Tremaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze is represented by nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another; an edge may also lead back to the same node, in which case it is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is a choice of direction that will lead to solving the maze.<br />
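The node-and-edge bookkeeping described above could look like this (an illustrative Python sketch; the real implementation and its fields may differ):<br />

```python
class Node:
    def __init__(self, x, y):
        self.pos = (x, y)   # global coordinates of the junction/dead end
        self.edges = []     # edges leaving this node

class Edge:
    def __init__(self, a, b):
        self.nodes = (a, b)  # if a is b, this edge is a loop
        self.visits = 0      # Tremaux mark count
        a.edges.append(self)
        if b is not a:
            b.edges.append(self)

    def is_loop(self):
        return self.nodes[0] is self.nodes[1]

start = Node(0.0, 0.0)
junction = Node(2.0, 0.0)
corridor = Edge(start, junction)   # a normal corridor between two nodes
loop = Edge(junction, junction)    # an edge leading back to the same node
```
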
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is on a new node or on an old one.<br />
** The robot figures out from which node it came. It can then determine which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge, and so on. This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.<br />
** The robot also has to realize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened for PICO. In that case: go straight ahead and mark the edge that led up to the door as ''visited 2 times''. If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge on your left; otherwise, the unvisited edge on your right.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, follow the one straight ahead; otherwise left, otherwise right.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
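The direction-choosing rules of the schedule can be condensed into one function (an illustrative Python sketch; the direction names stand in for the local-coordinate turn commands, and the dictionary interface is invented):<br />

```python
PRIORITY = ['straight', 'left', 'right', 'back']

def pick_direction(visits):
    """'visits' maps each traversable direction at the node to the
    visit count of its edge. Prefer the least-visited edges, never one
    visited twice; ties are broken straight > left > right > back."""
    open_dirs = {d: v for d, v in visits.items() if v < 2}
    if not open_dirs:
        return None  # back at the start with nothing left: no solution
    fewest = min(open_dirs.values())
    for d in PRIORITY:
        if open_dirs.get(d) == fewest:
            return d

cmd = pick_direction({'left': 0, 'straight': 1, 'right': 0, 'back': 1})
```

Note that this deterministic tie-break follows the schedule above; the Decision section's description of Trémaux's algorithm breaks ties randomly instead.<br />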
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Link to mapping page]<br />
<br />
= Localisation =<br />
The purpose of the localisation is to prevent the robot from driving in a loop indefinitely, by knowing where it is at any given moment. Based on this information it can decide what to do next. As a result, the robot will either exit the maze within finite time, or report that there is no exit after it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation page], by separating and discussing the important factors.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge. During this challenge, we had to let the robot drive through a corridor and then take the first exit (whether left or right). This job could be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implied extra work to design a properly structured code, because only then could the same script be used for the final challenge. After the corridor competition our choice could be questioned, since we failed the corridor challenge while other groups succeeded. However, most of those groups had chosen approach 1, while we already had a decent base for the a-maze-ing challenge. This proved its worth later on.<br />
<br />
For the a-maze-ing challenge we decided on using two versions of our software package. In the first run (see the Videos section further down the page), we implemented Trémaux's algorithm together with a localiser that would map the maze and try to solve it. Our second run was conducted with Trémaux's algorithm and the localisation algorithm turned off; each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
== Run 1 ==<br />
The first run was taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way crossing and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused because of the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because the 'door' (of course) didn't open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight but also to evade the walls to the left and to the right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits for the maximum of 5 seconds that it can take to open a door. It then recognizes that this too is a dead end and not a door. After turning around it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Trémaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the other nodes: the robot decides to follow the same route as the first time, fails to drive to the corridor with the door in it and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction that the robot encountered made PICO go into its collision avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
== Run 2==<br />
For the second run, we ran a version of our software without Trémaux's algorithm and with the global localiser absent; these features were developed later in the project and were not finished completely. In this run, a random choice was passed to the decision maker every time it asked for a new direction to head to.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is taken exactly the same way: the robot is driven by separate blocks of software, and the blocks that are active while following a corridor were identical in both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. A random decision is then made to go left, followed by a right turn into the corridor with the door. It recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after ringing the bell, the robot is programmed to wait five seconds before it is allowed to move again. During these five seconds, it uses the LRF to check whether the door has moved out of its way. Once the passage was clear, the robot started exploring the new area and drives into the open space. Note that, between 0:30 and 0:36, the robot makes a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of that space. Between 0:36 and 0:46 it drives in 'open space mode', in which the robot drives to the nearest wall and follows it, so that it will always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47 the normal 'corridor mode' starts working again: the potential field directs the robot towards the middle of the corridor, which explains the sharp turn at that moment. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, that decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. At 1:10, the well-earned applause for PICO starts as it finishes the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive] <-- empty<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19831Embedded Motion Control 2015 Group 3/Scan2015-06-23T10:22:09Z<p>S111845: /* Avoidance collision */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270-degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting the laser data received from the LRF into x and y components and summing them results in a vector with the appropriate angle for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for the velocity.<br />
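The resultant-vector computation can be sketched in a few lines. This is an illustrative Python sketch (the actual PICO code is C++); the beam angles are assumed to span the 270-degree view, with positive angles to the left.<br />

```python
import math

def potential_field_angle(ranges, angles):
    """Sum each LRF beam as a vector (split into x and y components);
    the resultant's angle points towards the most open direction. The
    magnitude is discarded, as Drive sets the velocity itself."""
    x = sum(r * math.cos(a) for r, a in zip(ranges, angles))
    y = sum(r * math.sin(a) for r, a in zip(ranges, angles))
    return math.atan2(y, x)

# A symmetric corridor: equal room left and right, so the resultant
# points straight ahead (angle ~ 0).
angles = [math.radians(d) for d in range(-135, 136)]  # 270-degree view
ranges = [1.0] * len(angles)
heading = potential_field_angle(ranges, angles)

# More room to the left: the resultant angle becomes positive.
more_room_left = [2.0 if a > 0 else 1.0 for a in angles]
heading_left = potential_field_angle(more_room_left, angles)
```
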
<br />
In straight corridors the potential field lets PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls: in principle, an extra layer with modified laser range finder data has been added to what PICO sees. From there on the potential field does its work and PICO drives in the desired direction.<br />
<br />
The potential field function perceives these virtual walls as real walls. Therefore, PICO avoids these 'walls' and drives into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector always points towards the direction with the most room, which suffices as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. Another safety layer has therefore been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin: if the distance of multiple adjacent beams is smaller than this fixed parameter, the robot moves in the opposite direction.<br />
<br />
=== Detecting intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking beams n+10 and n-10 and checking whether they differ by more than 30 centimeters; in that case there is a corridor. With this simple but very effective method, left and right corridors can be distinguished. Next, a corridor in front is detected by summing multiple beams at the front and dividing by the number of beams. If this average differs by more than a set value from the middle beam, there is a corridor. <br />
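Both detection rules can be sketched as follows (illustrative Python; the ±10 beam step and the 30 cm jump come from the text, while the front window of 20 beams and the 0.5 m margin are invented example values):<br />

```python
def side_openings(scan, step=10, jump=0.30):
    """Corridor edges to the side: beam indices where the range jumps
    by more than 30 cm between beams n-10 and n+10."""
    return [n for n in range(step, len(scan) - step)
            if abs(scan[n + step] - scan[n - step]) > jump]

def corridor_ahead(scan, n_front=20, margin=0.5):
    """Average several beams around the middle and compare with the
    middle beam itself; a large difference means an opening ahead."""
    mid = len(scan) // 2
    front = scan[mid - n_front // 2: mid + n_front // 2]
    return abs(sum(front) / len(front) - scan[mid]) > margin

# Walls 1 m away everywhere, with a side corridor seen over beams 60-79:
scan = [1.0] * 200
scan[60:80] = [3.0] * 20
edges = side_openings(scan)

# A second scan with an opening straight ahead:
scan2 = [1.0] * 200
scan2[95:106] = [4.0] * 11
```
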
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter, PICO knows it is in an open space and starts wall hugging in order to find the exit. PICO stops this procedure once it detects a corridor of at most 1.5 meters wide, the maximum width of a corridor.<br />
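The 80%-of-beams rule itself is a one-liner (illustrative Python, using the 1 m and 80% values from the text):<br />

```python
def in_open_space(scan, far=1.0, fraction=0.8):
    """PICO is in open space when at least 80% of the beams see more
    than 1 metre of room; it then switches to wall hugging."""
    far_beams = sum(1 for d in scan if d > far)
    return far_beams >= fraction * len(scan)

open_hall = [2.5] * 90 + [0.8] * 10   # 90% of the beams see > 1 m
corridor  = [0.7] * 80 + [2.5] * 20   # only 20% do
```
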
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. At first, individual virtual walls were constructed to block potential corridors, which leads PICO in the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider the crossroad shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go. By slightly modifying the data, the actual vision as seen in the simulator can be constructed, shown in (b). In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corner is used as a reference point for computing the radius where the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
===== T-junction =====<br />
<br />
Now a T-junction is examined; Figure 2 (a) shows what PICO sees in this case. The figure shows two maxima with a minimum in between. These two maxima are used as bounds for finding the minimum. When the robot turns it is hard to keep a reference, so this is a good method of finding the reference point from which to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different: using the above method, the minimum will not represent a corner. Locating this minimum is still useful, however; depending on the kind of turn (100+n[minimum] or 100-n[minimum]), a radius is computed which represents the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19805Embedded Motion Control 2015 Group 32015-06-23T09:23:09Z<p>S111845: /* Scan block */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add an introduction/explanation<br />
| <br />
|- <br />
| 6.3 <br />
| Make separate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply it in the context of autonomous robots. The accompanying assignment applies this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight into the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). To make this operational, and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* So, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
The robot has a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level actions the robot should be able to perform. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
<br />
[[File:behaviour_diagram.png|400px|thumb|center|Blockdiagram for connection between the contexts]]<br />
<br />
=== Software architecture ===<br />
<br />
We have divided the problem into four blocks: Drive, Scan, Decision and Mapping. The following is the overall structure of the software:<br />
<br />
[[File:Picture1.jpg|400px|thumb|center|Cohesion of Drive-, Scan-, Decision- and Mapping block]]<br />
<br />
=== Calibration ===<br />
In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors. The robot was programmed to drive back and forth in front of a wall, collecting odometry data and laser data at every time instance. The laser data point straight in front of the robot was compared to the odometry data, i.e. the driven distance was compared to the measured distance to the wall in front of the robot. The following figure is the result:<br />
<br />
<br />
[[File:Originaldata.png|400px|thumb|center|Difference between odometry and LRF]]<br />
<br />
The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped so that the laser data should match the odometry exactly if the sensors provided perfect data. Two things are notable from this figure:<br />
*The laser data and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
The noisy signal returned by the laser is presented in the next figure, which shows part of the laser data of a robot that was not moving.<br />
<br />
<br />
[[File:StaticLRF.png|400px|thumb|center|Static LRF]]<br />
<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm<br />
* The laser produces a noisy signal. Do not trust one measurement but take the average over time instead.<br />
* The odometry produces no notable noise at all, but it drifts significantly as the driven distance increases. Usage is recommended only for smaller distances (<1 m).<br />
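The averaging recommendation can be sketched as follows. This is a minimal Python illustration: the window size is an example choice, and the samples below are made up, not measured PICO data (the measured noise had a standard deviation of roughly 5.5 mm, so averaging 10 samples reduces the standard error to roughly 5.5/sqrt(10), about 1.7 mm).<br />

```python
def smoothed_range(history, window=10):
    """Average the last 'window' measurements of one beam instead of
    trusting a single noisy reading."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Illustrative samples of one static beam, ~1 m plus noise:
samples = [1.004, 0.998, 1.006, 0.995, 1.002,
           0.999, 1.005, 0.996, 1.003, 0.992]
estimate = smoothed_range(samples)
```
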
<br />
= Software implementation =<br />
In this section, the implementation of the software is discussed based on the block division we made.<br />
<br />
A brief description of each block can be found here; more detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, this block is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field; how the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. The potential field is an easy way to drive through corridors and make turns.<br />
<br />
Two other methods were also considered: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. However, the potential field was the most robust and easiest method.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Link to drive page]<br />
<br />
The composition pattern of the drive block:<br />
[[File:cpdrive.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan The block Scan] processes the laser data of the Laser Range Finders. This data is used to detect corridors, doors, and different kinds of junctions. The data retrieved by 'scan' is used by all three other blocks.<br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
[[File:Scan_cp_new.jpg|400px|thumb|center|Composition pattern Scan]]<br />
<br />
===== Potential field =====<br />
The laser data received from the LRF is split into x and y components; summing them gives a resultant vector with the appropriate angle for PICO to follow. Therefore, PICO always moves towards the place with the most space. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for setting the velocity. <br />
<br />
With the help of the potential field, PICO drives through the middle of corridors in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker; detailed information about the decision maker can be found on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision page]. <br />
<br />
<br />
===== Constructing virtual walls =====<br />
At junctions and intersections the current potential field alone is unable to lead PICO in the desired direction. Therefore, an extra layer is added to the scan data, which enables editing of the data that PICO gets to see. The main advantage of introducing this second layer is that the actual measured data remains available for the various processes used in the other blocks. By modifying the data in the extra layer, virtual walls are constructed; through the potential field, these steer PICO in the desired direction. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
===== Collision avoidance =====<br />
<br />
To create an extra layer of safety, collision avoidance has been added on top of the potential field. In general the potential field avoids collisions; however, constructing virtual walls can fail and let PICO crash into a wall. The collision avoidance is kept simple: when the distance of multiple adjacent LRF beams is below a certain threshold value, PICO moves in the opposite direction. Using multiple beams makes this method more robust. The current parameter for activating collision avoidance is set at 30 centimeters measured from the scanner; note that this value is based on the dimensions of PICO.<br />
<br />
===== Detecting doors =====<br />
PICO has to be able to detect and open a door to get to the end of the maze. First, a location has to be detected as a possible door or a dead end. This is done using the LRF data, in principle by looking for a specific profile which qualifies as a door or a dead end. By sending the door request and waiting for a few seconds, PICO detects whether it is a door or not. If it is a door, PICO must go through; if not, PICO has to turn around and keep searching.<br />
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
[[File:Composition_Pattern_Decision.png|400px|thumb|center|Composition pattern Decision]]<br />
<br />
The maze-solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks. If two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached. (ref)<br />
<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block stores the corridors and junctions of the maze, so that the decision block can weigh the different possibilities and solve the maze in a strategic way.<br />
<br />
As is said in the previous paragraph, the Tremaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze consists of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the set of possible routes the robot can take (left, straight ahead, right, turn around) and the output is the choice of direction that will lead to solving the maze.<br />
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is on a new node or an old node.<br />
** The robot figures out which node it came from. It can then determine which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge, and so on. This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.<br />
** The robot also has to recognize whether the current node is connected to a dead end. In that case, it requests the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened for you. In that case: go straight ahead and mark the edge that led up to the door as ''visited 2 times''. If not, choose the edge you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, follow the one straight ahead; otherwise left, otherwise right.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (unvisited or visited once), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Link to mapping page]<br />
<br />
= Localisation =<br />
The purpose of localisation is to prevent the robot from driving in a loop indefinitely, by knowing where it is at any given moment. Based on this information, it can decide what to do next. As a result, the robot will exit the maze within finite time, or report that there is no exit after it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation page], where the important factors are discussed separately.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge. During this challenge, the robot has to drive through a corridor and then take the first exit (whether left or right). This job can be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implied extra work to set up a properly structured code base, because only then could the same script be used for the final challenge. After the corridor competition our choice could be questioned, since we failed the corridor challenge while other groups succeeded. However, most of those groups had chosen approach 1, whereas we already had a decent base for the a-maze-ing challenge. This proved its worth later on.<br />
<br />
For the a-maze-ing challenge we decided to use two versions of our software package. In the first run (see the Videos section further down the page), we implemented Tremaux's algorithm together with a localiser, which would together map the maze and try to solve it. Our second run was conducted with Tremaux's algorithm and the localisation algorithm turned off. Each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
== Run 1 ==<br />
The first run was taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way intersection and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused by the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because the door (of course) did not open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight while also evading the walls to its left and right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits the maximum of 5 seconds that opening the door can take. It then recognizes that this too is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges it has not yet seen. Here, Tremaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the other nodes. The robot decides to follow the same route as the first time, fails to drive into the corridor with the door in it, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction the robot encountered put PICO into its collision-avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
== Run 2==<br />
For the second run, we used a version of our software without Tremaux's algorithm and without the global localiser. These features were developed later in the project and were not yet fully finished. In this run, a random choice was passed to the decision maker every time it asked for a new direction to head in.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same as in the first run. This is because the robot is driven by separate blocks of software, and the blocks that are active while following a corridor were identical for both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. A random decision is then made to go left, followed by a right turn into the corridor with the door. The robot recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after ringing the bell, the robot is programmed to wait five seconds before it is allowed to move again. During these five seconds, it uses the LRF to check whether the door has moved out of its way. Once the passage was clear, the robot started exploring the new area and drives into the open space. Note that between 0:30 and 0:36 the robot makes a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of that space. Between 0:36 and 0:46 it drives in 'open space mode'. This means that the robot drives to the nearest wall and starts driving alongside it; it should thereby always find a new node where a new decision can be made. In doing so, it drives into a corridor. Note that at 0:47 the normal 'corridor mode' starts working again: the potential field method directs the robot towards the middle of the corridor, which explains the sharp turn it makes at 0:47. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. At 1:10, the well-earned applause for PICO starts as it finishes the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were conducted during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find the dates and goals of the experiments, along with a short evaluation of each.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive] <-- empty<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19059Embedded Motion Control 2015 Group 3/Scan2015-06-07T19:26:42Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of available data is the laser range finder (LRF) data. The PICO robot has a 270-degree field of view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting the received LRF data into x and y components and summing them results in a resultant vector whose angle is the appropriate direction for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.<br />
<br />
In straight corridors the potential field lets PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer has been added containing the modified laser range finder data that PICO sees. From there on, the potential field does its work and PICO drives in the desired direction.<br />
<br />
The potential field function perceives these virtual walls as real walls. Therefore, PICO avoids them and drives into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm: its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. Another safety layer has therefore been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance measured by multiple adjacent beams is smaller than this fixed parameter, the robot moves in the opposite direction.<br />
<br />
=== Detecting intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze; it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking beams n+10 and n-10 and checking whether they differ by more than 30 centimeters; in that case there is a corridor. With this simple but very effective method, left and right corridors can be distinguished. Next, a corridor in front is detected by summing multiple beams at the front and dividing by the number of beams. If this average differs by more than a set value from the middle beam, there is a corridor in front. <br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter, PICO knows it is in an open space and starts wall hugging in order to find the exit. PICO stops this procedure when the corridor is equal to or smaller than 1.5 meters, which is the maximum size of a corridor.<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. At first, individual virtual walls were constructed to block potential corridors, leading PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider a crossroad as shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:crossroad.png|400px|thumb|center|Figure 1) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
Figure 1 shows two minima, which represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corner is used as a reference point for computing the radius at which the virtual walls are set.<br />
<br />
===== T-junction =====<br />
<br />
Now a T-junction is examined further; Figure 2 (a) shows what PICO sees in this case. The figure shows two maxima with a minimum in between. The two maxima are used as bounds for finding the minimum. While the robot turns it is hard to keep a reference, so this is a good method of finding the reference point from which to construct the virtual walls.<br />
<br />
<br />
[[File:T-junction.png|400px|thumb|center|Figure 2) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 2 maxima and 1 minimum, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
In the case of a T-junction the situation is slightly different: using the above method, the minimum will not represent a corner. However, locating this minimum is useful; depending on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed which represents the virtual wall.</div>
S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19055Embedded Motion Control 2015 Group 32015-06-07T19:10:46Z<p>S111845: /* Avoidance collision */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). This knowledge is made operational by practically implementing an embedded control system for an autonomous robot in the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, the students have to let the robot drive through a corridor and then take the first exit.<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls; to do so, it must perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level proceedings the robot should be able to perform. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<pre style="color:red">(don't know for sure because in an old version was this:)<br />
* Drive<br />
* Turn<br />
* Scan<br />
* Wait<br />
</pre><br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
<pre style="color: red"><br />
* Drive to location<br />
* Drive to a position in the maze based on the information in the map. This includes (multiple) drive and turn actions. Driving will also always include scanning to ensure there are no collisions. <br />
* Check for doors<br />
* Wait at the potential door location for a predetermined time period while scanning the distance to the potential door to check if it opens.<br />
* Locate obstacles<br />
* Measure whether the preset minimum safe distance from the walls is violated, or detect that the robot is not moving as expected according to the driving action.<br />
* Map the environment<br />
* Store the relative positions of all discovered objects and doors and incorporate them into a map.<br />
These skills are then used in the higher order behaviors of the software. These are specified in the specifications section.<br />
</pre><br />
<br />
[[File:blockdia.PNG|400px|thumb|center|Block diagram of the connections between the contexts (update?)]]<br />
<br />
=== (Specifications) ===<br />
The first specification results from the second requirement: Driving without bumping into objects. In order to do this, the robot uses its sensors to scan the surroundings. It then adjusts its speed and direction to maintain a safe distance from the walls.<br />
<br />
The way the robot will solve the maze comprises a few principal capabilities. Because of the addition of doors in the maze, the strategy of wall hugging is no longer effective, so a different approach is required. The second specification is that the robot remembers what it has seen of the maze and builds a map accordingly. The robot should then use this map to decide which avenue to explore. <br />
The escape strategy of the robot is algorithmic. Because the doors in the maze might not be clearly distinguishable, they might be difficult to detect. The only way to know for sure whether a door is present at a certain location is to stand still in front of it until it opens. Standing still in front of every piece of wall to check for doors takes a long time and is therefore not desirable when escaping the maze as fast as possible. Therefore the robot first assumes that there are no doors on the way to the exit. It then explores by following the wall and taking every left turn. Whenever a dead end is hit, the robot goes back to the last intersection and chooses the next left path. Because the robot maps the maze, it knows whether it has explored an area and when it moves in a loop.<br />
<br />
=== Structure ===<br />
<br />
The problem is divided into four blocks: Drive, Scan, Decision and Mapping. The following is the overall structure of the software:<br />
<br />
[[File:drivescanmapdec.png|400px|thumb|center|Cohesion of Drive-, Scan-, Decision- and Mapping block]]<br />
<br />
=== Calibration ===<br />
Calibration part (if we want)<br />
<br />
...<br />
<br />
...<br />
<br />
= Software implementation =<br />
In this section the implementation of the software is discussed, based on the block division made above.<br />
<br />
A brief description of each block is given here; more detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, this block is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. A potential field is a simple way to drive through corridors and make turns.<br />
<br />
Two other methods were also considered: the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. However, the potential field proved to be the most robust and simplest method.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Link to drive page]<br />
<br />
The composition pattern of the drive block:<br />
[[File:cpdrive.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
The Scan block processes the laser data of the laser range finder. This data is used to detect corridors, doors and different kinds of junctions. The data retrieved by 'scan' is used by all three other blocks.<br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
===== Potential field =====<br />
Splitting the received LRF laser data into x and y components and summing them results in a vector whose angle is the appropriate direction for PICO to follow. In other words, PICO always moves toward the place with the most space. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for setting the velocity. <br />
<br />
In straight corridors the potential field lets PICO drive robustly through the middle. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker. <br />
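The summation can be sketched as follows (a minimal Python sketch; the beam count, field of view and function name are illustrative, not the actual PICO interface):<br />

```python
import math

def potential_field_angle(ranges, fov=4.71):
    """Sum the x and y components of all beams; the angle of the
    resultant vector points toward the most free space."""
    n = len(ranges)
    x = y = 0.0
    for i, r in enumerate(ranges):
        a = -fov / 2 + i * fov / (n - 1)  # beam angle over a ~270 degree view
        x += r * math.cos(a)
        y += r * math.sin(a)
    # Only the angle matters; Drive sets the velocity magnitude itself.
    return math.atan2(y, x)

# In a symmetric corridor the sideways components cancel out,
# so the resultant points straight ahead (angle ~ 0).
angle = potential_field_angle([1.0, 2.0, 3.0, 2.0, 1.0])
```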
<br />
<br />
===== Constructing virtual walls =====<br />
At junctions and intersections the plain potential field is unable to lead PICO in the desired direction. Therefore, an extra layer is added on top of the scan data, which enables editing of the LRF data that PICO sees. The main advantage of introducing this second layer is that the actual measured data remains available for the various processes in the other blocks. By modifying the data, virtual walls are constructed that steer PICO in the desired direction through the potential field. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
===== Collision avoidance =====<br />
<br />
To create an extra layer of safety, collision avoidance has been added on top of the potential field. In general the potential field avoids collisions; however, when constructing virtual walls fails, the robot may crash into a wall and the attempt at solving the maze is over. The collision avoidance itself is fairly simple: when the distance of multiple coextensive LRF beams drops below a certain value, PICO moves in the opposite direction. Multiple beams are used to make this method more robust. The current threshold for activating collision avoidance is set at 30 centimeters measured from the scanner; note that this value is based on the dimensions of PICO.<br />
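A minimal sketch of this check (illustrative Python; the 30 cm margin is from the text, while the required number of consecutive beams and the function name are assumptions):<br />

```python
def collision_risk(ranges, safety_dist=0.30, min_beams=5):
    """True when at least `min_beams` consecutive LRF beams measure
    less than the safety margin; requiring several beams filters
    out single-beam outliers."""
    run = 0
    for r in ranges:
        run = run + 1 if r < safety_dist else 0
        if run >= min_beams:
            return True
    return False

# A wall segment 20 cm away triggers the safety layer;
# isolated spurious short readings do not.
```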
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
The used maze-solving algorithm is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, the direction with the fewest marks is chosen. If two directions have been visited equally often, one of them is chosen at random. Eventually, the exit of the maze will be reached. (ref)<br />
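The selection rule can be sketched as follows (an illustrative Python sketch; the direction names and the mark bookkeeping are assumptions, not the actual implementation):<br />

```python
import random

def tremaux_choice(marks):
    """marks: direction -> number of times the corresponding edge
    has been traversed (0, 1 or 2). Pick among the least-marked
    directions, at random when there is a tie."""
    fewest = min(marks.values())
    candidates = [d for d, m in marks.items() if m == fewest]
    return random.choice(candidates)

# An unvisited corridor always wins over marked ones:
choice = tremaux_choice({'left': 1, 'straight': 0, 'right': 2})
```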
<br />
<pre style="color: red"><br />
Different situations when visiting a node<br />
* If it is a dead-end node<br />
* Did the door open for me<br />
* Any unvisited paths<br />
* Any paths with 1 visit<br />
* Paths with 2 visit (not a choice)<br />
</pre><br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block stores the corridors and junctions of the maze, so that the decision block can consider certain possibilities and the maze is solved in a strategic way.<br />
<br />
As stated in the previous paragraph, the Trémaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another; an edge may also lead back to the same node, in which case it is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is a choice of direction that will lead to solving the maze.<br />
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot determines where it is located in global coordinates, so it can decide whether it is at a new node or at a previously visited one.<br />
** The robot figures out which node it came from. Now it can define which edge it has been traversing, and it marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.<br />
** The robot also has to realize whether the current node is connected to a dead end. In that case, it requests the potential door to open.<br />
* Choosing a new direction:<br />
** Check whether the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''visited 2 times''. If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge on your left; otherwise, the unvisited edge on your right.<br />
** Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead; otherwise left, otherwise right.<br />
** Are there any edges visited twice? Do not go there. According to the Trémaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
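The node/edge bookkeeping sketched above could look like this (an illustrative Python sketch; the class and attribute names are assumptions, not the actual implementation):<br />

```python
class Node:
    def __init__(self, x, y):
        self.pos = (x, y)    # global coordinates
        self.edges = []

class Edge:
    def __init__(self, a, b):
        self.nodes = (a, b)  # a == b would make this edge a loop
        self.visits = 0      # Tremaux mark count: 0, 1 or 2

def traverse(edge):
    """Mark an edge as 'visited once more' while driving it."""
    edge.visits += 1

# Discovering a corridor between the start and a new junction:
start, junction = Node(0.0, 0.0), Node(2.0, 0.0)
corridor = Edge(start, junction)
start.edges.append(corridor)
junction.edges.append(corridor)
traverse(corridor)
```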
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Link to mapping page]<br />
<br />
=== Localisation ===<br />
Not far enough yet<br />
<br />
...<br />
<br />
...<br />
<br />
=== Integration ===<br />
....<br />
....<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find short information about the dates and goals of the experiments, together with a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19052Embedded Motion Control 2015 Group 3/Scan2015-06-07T18:46:56Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting the received LRF laser data into x and y components and summing them results in a vector whose angle is the appropriate direction for PICO to follow. In other words, PICO always moves toward the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.<br />
<br />
In straight corridors the potential field lets PICO drive robustly through the middle. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer has been added containing the modified laser range finder data that PICO sees. From there the potential field does its work and PICO drives in the desired direction.<br />
<br />
The potential field function perceives these virtual walls as real walls. Therefore, PICO avoids these 'walls' and drives into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector always points toward the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. A second safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter, the robot moves in the opposite direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is necessary not only for driving through the maze but also as an important part of mapping it, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
<br />
<br />
<br />
<br />
===== T-junction =====<br />
<br />
<br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter, PICO knows it is in an open space and therefore starts wall hugging in order to find the exit. PICO stops this procedure when the corridor is equal to or smaller than 1.5 meters, which is the maximum size of a corridor.<br />
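This test is straightforward to express (an illustrative Python sketch of the 80%/1 m rule stated above; the function name is hypothetical):<br />

```python
def in_open_space(ranges, fraction=0.8, far_dist=1.0):
    """PICO assumes an open space when at least 80% of the LRF
    beams measure more than 1 meter of free space."""
    far = sum(1 for r in ranges if r > far_dist)
    return far >= fraction * len(ranges)

# Nine far beams out of ten -> open space; five out of ten -> not.
```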
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First, individual virtual walls were constructed to block potential corridors, leading PICO in the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider a crossroad as shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can take. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:crossroad.png|400px|thumb|center|Figure 1) The LRF data from PICO: (a) the data PICO retrieves, in this case 3 maxima and 2 minima; (b) the slightly modified data showing the actual corridors]]<br />
<br />
In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corresponding corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
Now a T-junction is examined further; Figure 2 (a) shows what PICO sees in this case. The figure shows two maxima with a minimum in between. These two maxima are used as bounds for locating the minimum. <br />
<br />
<br />
[[File:T-junction.png|400px|thumb|center|Figure 2) The LRF data from PICO: (a) the data PICO retrieves, in this case 2 maxima and 1 minimum; (b) the slightly modified data showing the actual corridors]]<br />
<br />
In the case of a T-junction the situation is slightly different: using the above method, the minimum does not represent a corner. Locating this minimum is still useful; depending on the kind of turn (100+n or 100-n), a radius is computed which represents the virtual wall.</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19051Embedded Motion Control 2015 Group 3/Scan2015-06-07T18:43:13Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting the received LRF laser data into x and y components and summing them results in a vector whose angle is the appropriate direction for PICO to follow. In other words, PICO always moves toward the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.<br />
<br />
In straight corridors the potential field lets PICO drive robustly through the middle. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer has been added containing the modified laser range finder data that PICO sees. From there the potential field does its work and PICO drives in the desired direction.<br />
<br />
The potential field function perceives these virtual walls as real walls. Therefore, PICO avoids these 'walls' and drives into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector always points toward the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. A second safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter, the robot moves in the opposite direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is necessary not only for driving through the maze but also as an important part of mapping it, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider a crossroad as shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can take. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:crossroad.png|400px|thumb|center|Figure 1) The LRF data from PICO: (a) the data PICO retrieves, in this case 3 maxima and 2 minima; (b) the slightly modified data showing the actual corridors]]<br />
<br />
===== T-junction =====<br />
<br />
Now a T-junction is examined further; Figure 2 (a) shows what PICO sees in this case. The figure shows two maxima with a minimum in between. These two maxima are used as bounds for locating the minimum. <br />
<br />
<br />
[[File:T-junction.png|400px|thumb|center|Figure 2) The LRF data from PICO: (a) the data PICO retrieves, in this case 2 maxima and 1 minimum; (b) the slightly modified data showing the actual corridors]]<br />
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter, PICO knows it is in an open space and therefore starts wall hugging in order to find the exit. PICO stops this procedure when the corridor is equal to or smaller than 1.5 meters, which is the maximum size of a corridor.<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First, individual virtual walls were constructed to block potential corridors, leading PICO in the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corresponding corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction the situation is slightly different: using the above method, the minimum does not represent a corner. Locating this minimum is still useful; depending on the kind of turn (100+n or 100-n), a radius is computed which represents the virtual wall.</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19050Embedded Motion Control 2015 Group 3/Scan2015-06-07T18:30:56Z<p>S111845: /* Potential field */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting the received LRF laser data into x and y components and summing them results in a vector whose angle is the appropriate direction for PICO to follow. In other words, PICO always moves toward the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.<br />
<br />
In straight corridors the potential field lets PICO drive robustly through the middle. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer has been added containing the modified laser range finder data that PICO sees. From there the potential field does its work and PICO drives in the desired direction.<br />
<br />
The potential field function perceives these virtual walls as real walls. Therefore, PICO avoids these 'walls' and drives into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector always points toward the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. A second safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter, the robot moves in the opposite direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is necessary not only for driving through the maze but also as an important part of mapping it, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider a crossroad as shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can take. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:crossroad.png|400px|thumb|center|Figure 1) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
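The maxima in Figure 1(a) can be found, for example, by counting sufficiently wide runs of beams that see further than some opening distance. This Python sketch is our illustration; the thresholds are not the group's actual values:<br />

```python
def count_corridors(ranges, open_dist=1.5, min_beams=3):
    """Count runs of consecutive beams longer than open_dist; each
    sufficiently wide run is one candidate corridor (one maximum)."""
    corridors = 0
    run = 0
    for r in ranges:
        if r > open_dist:
            run += 1
            if run == min_beams:  # count each run only once
                corridors += 1
        else:
            run = 0
    return corridors
```

When approaching a crossroad this yields three corridors, at a T-junction two.<br />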
<br />
===== T-junction =====<br />
<br />
[[File:T-junction.png|400px|thumb|center|Figure 2) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 2 maxima and 1 minimum, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
===== Open space =====<br />
<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. At first, individual straight virtual walls were constructed to block potential corridors, leading PICO into the desired direction. At a later stage, this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1, two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corresponding corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction, the situation is slightly different: using the above method, the minimum does not represent a corner. However, locating this minimum is still useful; depending on the kind of turn, 100+n or 100-n, a radius is computed which represents the virtual wall.</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19049Embedded Motion Control 2015 Group 32015-06-07T18:28:10Z<p>S111845: /* Drive block */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight into the design and implementation of embedded motion systems, and to develop insight into the possibilities and limitations imposed by the embedded environment (actuators, sensors, processors, RTOS). This is made operational by practically implementing an embedded control system for an autonomous robot in the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, the students have to let the robot drive through a corridor and then take the first exit.<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* To do so, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level actions the robot should be able to perform. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<pre style="color:red">(don't know for sure because in an old version was this:)<br />
* Drive<br />
* Turn<br />
* Scan<br />
* Wait<br />
</pre><br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
<pre style="color: red"><br />
* Drive to location<br />
* Drive to a position in the maze based on the information in the map. This includes (multiple) drive and turn actions. Driving will also always include scanning to ensure there are no collisions. <br />
* Check for doors<br />
* Wait at the potential door location for a predetermined time period while scanning the distance to the potential door to check if it opens.<br />
* Locate obstacles<br />
* Measure the preset minimum safe distance from the walls or measure not moving as expected according to the driving action.<br />
* Map the environment<br />
* Store the relative positions of all discovered object and doors and incorporate them into a map.<br />
These skills are then used in the higher order behaviors of the software. These are specified in the specifications section.<br />
</pre><br />
<br />
[[File:blockdia.PNG|400px|thumb|center|Blockdiagram for connection between the contexts (update?)]]<br />
<br />
=== (Specifications) ===<br />
The first specification results from the second requirement: Driving without bumping into objects. In order to do this, the robot uses its sensors to scan the surroundings. It then adjusts its speed and direction to maintain a safe distance from the walls.<br />
<br />
The way the robot will solve the maze comprises a few principal things the robot will be able to do. Because of the addition of doors in the maze, the strategy of wall hugging is no longer effective, hence a different approach is required. The second specification is that the robot will remember what it has seen of the maze and that it makes a map accordingly. The robot should then use this map to decide which avenue it should explore. <br />
The escape strategy of the robot is defined by an algorithm. Because the doors in the maze might not be clearly distinguishable, they might be difficult to detect. The only way to know for sure whether a door is present at a certain location is to stand still in front of it until it opens. Standing still in front of every piece of wall to check for doors takes a long time and is therefore not desirable for escaping the maze as fast as possible. Hence, the robot first assumes that there are no doors in the way to the exit. It then explores by following the wall and taking every left turn. Whenever a dead end is hit, the robot goes back to the last intersection and chooses the next left path. Because the robot maps the maze, it knows whether it has explored an area and when it moves in a loop.<br />
<br />
=== Structure ===<br />
<br />
The problem is divided into different blocks. We have chosen to make these four blocks: Drive, Scan, Decision and Mapping. The following is an overall structure of the software:<br />
<br />
[[File:drivescanmapdec.png|400px|thumb|center|Cohesion of Drive-, Scan-, Decision- and Mapping block]]<br />
<br />
=== Calibration ===<br />
Calibration part (if we want)<br />
<br />
...<br />
<br />
...<br />
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
Brief instruction about one block can be found here. In addition, there are also more detailed problem-solving processes and ideas which can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, this block is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field; how the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. The potential field is a simple way to drive through corridors and make turns.<br />
<br />
Two other methods were also considered: the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning path planning]. However, the potential field proved to be the most robust and simplest method.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Link to drive page]<br />
<br />
The composition pattern of the drive block:<br />
[[File:cpdrive.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
The Scan block processes the laser data of the laser range finder. This data is used to detect corridors, doors and different kinds of junctions. The data that is retrieved by 'scan' is used by all three other blocks.<br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
===== Potential field =====<br />
Splitting the received LRF data into x- and y-components and summing these components results in a large vector containing the appropriate angle for PICO to follow. In other words, PICO always moves towards the place with the most space. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for setting the velocity. <br />
<br />
In straight corridors, the potential field makes PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker. <br />
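The resulting heading can be computed directly from the raw scan, as in this Python sketch (the function name and interface are our assumptions, not the group's actual code):<br />

```python
import math

def potential_field_heading(ranges, angle_min, angle_inc):
    """Sum the x- and y-components of all LRF beams; the resultant
    vector points towards the direction with the most free space.
    Only its angle is used: Drive sets the velocity magnitude itself."""
    sx = sy = 0.0
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_inc
        sx += r * math.cos(angle)
        sy += r * math.sin(angle)
    return math.atan2(sy, sx)
```

In a symmetric corridor the lateral components cancel and the heading is straight ahead; any asymmetry pulls the heading towards the side with more room.<br />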
<br />
<br />
===== Constructing virtual walls =====<br />
At junctions and intersections, the current potential field is unable to lead PICO in the desired direction. Therefore, an extra layer is added to the scan data, which enables editing of the LRF data that PICO will see. The main advantage of introducing this second layer is that the actual measured data remains available to all processes in the different blocks. By modifying the data, virtual walls are constructed; these steer PICO into the desired direction through the potential field. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
===== Collision avoidance =====<br />
<br />
To create an extra layer of safety, collision avoidance has been added on top of the potential field. In general the potential field avoids collisions, but if constructing virtual walls fails, the robot may crash into a wall and the attempt at solving the maze is over. The collision avoidance itself is fairly simple: when the distance measured by multiple consecutive LRF beams is below a certain value, PICO moves in the opposite direction. Multiple beams are used to make this method more robust. The current parameter for activating collision avoidance is set at 30 centimeters measured from the scanner; note that this value is based on the dimensions of PICO.<br />
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
The maze-solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, the direction with the fewest marks is chosen; if two directions have been visited equally often, one of them is chosen at random. Eventually, the exit of the maze will be reached. (ref)<br />
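The direction choice of Trémaux's algorithm can be sketched as follows (Python; the dictionary interface is our illustration of the mark counts, not the group's data structure):<br />

```python
import random

def tremaux_choice(edge_visits):
    """edge_visits maps each available direction at a junction to the
    number of marks on its edge. Choose a direction with the fewest
    marks, breaking ties at random; an edge marked twice is never taken."""
    fewest = min(edge_visits.values())
    if fewest >= 2:
        return None  # all edges marked twice: back at the start, no solution
    candidates = [d for d, v in edge_visits.items() if v == fewest]
    return random.choice(candidates)
```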
<br />
<pre style="color: red"><br />
Different situations when visiting a node<br />
* If it is a dead-end node<br />
* Did the door open for me<br />
* Any unvisited paths<br />
* Any paths with 1 visit<br />
* Paths with 2 visit (not a choice)<br />
</pre><br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block will store the corridors and junctions of the maze, so that the decision block can weigh the different possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As mentioned in the previous paragraph, Trémaux's algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is a choice of direction that will lead to solving the maze.<br />
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is at a new node or at a known node.<br />
** The robot figures out which node it came from. It can then determine which edge it has been traversing and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot will also have to realize if the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''visited 2 times''. If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead. Otherwise left, otherwise right.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
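The bookkeeping in the schedule above could be organised as in this Python sketch (class and attribute names are our assumptions, not the group's code):<br />

```python
class MazeMap:
    """Minimal node/edge store for the map-updating step above."""
    def __init__(self):
        self.nodes = {}      # node id -> (x, y) in global coordinates
        self.visits = {}     # frozenset of two node ids -> traversal count
        self.came_from = None

    def at_node(self, node_id, position):
        """Called at every junction or dead end; returns True if the
        node is new. Marks the edge travelled as 'visited once more'."""
        is_new = node_id not in self.nodes
        self.nodes.setdefault(node_id, position)
        if self.came_from is not None:
            edge = frozenset({self.came_from, node_id})
            self.visits[edge] = self.visits.get(edge, 0) + 1
        self.came_from = node_id  # set-up for the next node
        return is_new
```

An edge from a node back to itself collapses to a single-element set, so loops are counted as well.<br />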
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Link to mapping page]<br />
<br />
=== Localisation ===<br />
Not far enough yet<br />
<br />
...<br />
<br />
...<br />
<br />
=== Integration ===<br />
....<br />
....<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19048Embedded Motion Control 2015 Group 3/Scan2015-06-07T18:26:19Z<p>S111845: /* Crossroad */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting the received LRF data into x- and y-components and summing these components results in a large vector containing the appropriate angle for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.<br />
<br />
In straight corridors, the potential field will let PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer has been added containing the modified laser range finder data that PICO sees. From there on, the potential field will do its work and PICO will drive in the desired direction.<br />
<br />
The potential field function will perceive these virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the right corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls. <br />
<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities: if PICO bumps into a wall, the attempt at solving the maze is over. Therefore, another safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance measured by multiple consecutive beams is smaller than this fixed parameter, the robot moves in the opposite direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage, the basic skill of driving with the potential field based on LRF data is completed. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider the crossroad shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:crossroad.png|400px|thumb|center|Figure 1) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
===== T-junction =====<br />
<br />
[[File:T-junction.png|400px|thumb|center|Figure 2) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 2 maxima and 1 minimum, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
===== Open space =====<br />
<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. At first, individual straight virtual walls were constructed to block potential corridors, leading PICO into the desired direction. At a later stage, this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1, two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corresponding corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction, the situation is slightly different: using the above method, the minimum does not represent a corner. However, locating this minimum is still useful; depending on the kind of turn, 100+n or 100-n, a radius is computed which represents the virtual wall.</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=File:Crossroad.png&diff=19047File:Crossroad.png2015-06-07T18:25:47Z<p>S111845: </p>
<hr />
<div></div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19046Embedded Motion Control 2015 Group 3/Scan2015-06-07T18:24:07Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting the received LRF data into x- and y-components and summing these components results in a large vector containing the appropriate angle for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.<br />
<br />
In straight corridors, the potential field will let PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer has been added containing the modified laser range finder data that PICO sees. From there on, the potential field will do its work and PICO will drive in the desired direction.<br />
<br />
The potential field function will perceive these virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the right corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls. <br />
<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities: if PICO bumps into a wall, the attempt at solving the maze is over. Therefore, another safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance measured by multiple consecutive beams is smaller than this fixed parameter, the robot moves in the opposite direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage, the basic skill of driving with the potential field based on LRF data is completed. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider the crossroad shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|Figure 1) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
===== T-junction =====<br />
<br />
[[File:T-junction.png|400px|thumb|center|Figure 2) The LRF data from PICO, (a) showing the data PICO retrieves, in this case 2 maxima and 1 minimum, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
===== Open space =====<br />
<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. At first, individual straight virtual walls were constructed to block potential corridors, leading PICO into the desired direction. At a later stage, this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1, two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corresponding corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction, the situation is slightly different: using the above method, the minimum does not represent a corner. However, locating this minimum is still useful; depending on the kind of turn, 100+n or 100-n, a radius is computed which represents the virtual wall.</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=File:T-junction.png&diff=19045File:T-junction.png2015-06-07T18:23:07Z<p>S111845: uploaded a new version of "File:T-junction.png"</p>
<hr />
<div></div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19044Embedded Motion Control 2015 Group 3/Scan2015-06-07T18:22:51Z<p>S111845: /* T-junction */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting the received LRF data into x- and y-components and summing these components results in a large vector containing the appropriate angle for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.<br />
<br />
In straight corridors, the potential field will let PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer has been added containing the modified laser range finder data that PICO sees. From there on, the potential field will do its work and PICO will drive in the desired direction.<br />
<br />
The potential field function will perceive these virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the right corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls. <br />
<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities: if PICO bumps into a wall, the attempt at solving the maze is over. Therefore, another safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance measured by multiple consecutive beams is smaller than this fixed parameter, the robot moves in the opposite direction.<br />
<br />
=== Detection of intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is completed. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider the crossroad shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|Figure 1) The LRF data from PICO: (a) the data PICO retrieves, in this case 3 maxima and 2 minima; (b) the slightly modified data showing the actual corridors]]<br />
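Detecting the maxima can be sketched as grouping adjacent beams whose range exceeds an openness threshold; at a crossroad this yields three groups. This is an illustrative Python sketch, with the threshold as an assumed tuning parameter:<br />

```python
def find_corridors(ranges, open_dist=2.0):
    """Return (start, end) beam-index pairs of groups of adjacent
    beams whose range exceeds `open_dist`; each group is a
    candidate corridor (a maximum in the LRF data)."""
    corridors = []
    start = None
    for i, r in enumerate(ranges):
        if r > open_dist and start is None:
            start = i                      # corridor opens here
        elif r <= open_dist and start is not None:
            corridors.append((start, i - 1))
            start = None                   # corridor closed
    if start is not None:                  # corridor at scan edge
        corridors.append((start, len(ranges) - 1))
    return corridors
```

Classifying the junction then reduces to counting the groups: three for a crossroad, two for a T-junction.<br />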
<br />
===== T-junction =====<br />
<br />
===== Open space =====<br />
<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. At first, individual virtual walls were constructed to block potential corridors, thereby leading PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1, two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corresponding corner is used as the reference point for computing the radius. <br />
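Computing a wall on a radius can be sketched as intersecting each laser beam with a circle centred on the detected corner, in coordinates relative to PICO. This is an illustrative Python sketch; the radius is an assumed tuning parameter:<br />

```python
import math

def arc_wall_range(beam_angle, corner, radius):
    """Distance along a beam (from PICO at the origin) to a virtual
    circular wall of the given radius centred on a detected corner.
    Returns None if the beam misses the arc."""
    cx, cy = corner
    ux, uy = math.cos(beam_angle), math.sin(beam_angle)
    # Solve |t*u - c|^2 = radius^2 for the nearest intersection t.
    b = -2.0 * (ux * cx + uy * cy)
    c = cx * cx + cy * cy - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                     # beam misses the circle
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None       # only hits in front of PICO
```

Beams that hit the arc are shortened to this distance, so the potential field steers PICO around the corner on a smooth curve.<br />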
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction the situation is slightly different: with the above method, the minimum does not represent a corner. However, locating this minimum is still useful; depending on the kind of turn, a radius is computed at 100+n or 100-n, which represents the virtual wall.</div>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the right corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls. <br />
<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is completed. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
===== T-junction =====<br />
<br />
<br />
<br />
<br />
<br />
===== Open space =====<br />
<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smooth through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction the situation is slightly different, in this case using the above method the minima will not represent a corner. However locating this minima is usefull, dependent on the kind of turn 100+n or 100-n a radius is computed which will represent the virtual wall.</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19042Embedded Motion Control 2015 Group 3/Scan2015-06-07T18:13:44Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the right corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls. <br />
<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is completed. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
===== T-junction =====<br />
<br />
<br />
<br />
<br />
<br />
===== Open space =====<br />
<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smooth through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction the situation is slightly different since with the above method the minima between the two maxima will not represent an corner. However locating this minima is again usefull, dependent on the kind of turn 100+n or 100-n a radius is computed which will represent the walls</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19041Embedded Motion Control 2015 Group 3/Scan2015-06-07T18:13:14Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity.<br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the right corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls. <br />
<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is completed. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
===== T-junction =====<br />
<br />
<br />
<br />
<br />
<br />
===== Open space =====<br />
<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smooth through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction the situation is slightly different since with the above method the minima between the two maxima will not represent an corner. However locating this minima is again usefull, dependent on the kind of turn 100+n or 100-n a radius is computed which will represent the walls</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19040Embedded Motion Control 2015 Group 3/Scan2015-06-07T18:10:36Z<p>S111845: /* Crossroad */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is completed. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
===== T-junction =====<br />
<br />
<br />
<br />
<br />
<br />
===== Open space =====<br />
<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smooth through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction the situation is slightly different since with the above method the minima between the two maxima will not represent an corner. However locating this minima is again usefull, dependent on the kind of turn 100+n or 100-n a radius is computed which will represent the walls</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19039Embedded Motion Control 2015 Group 3/Scan2015-06-07T17:56:46Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is completed. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
<br />
<br />
<br />
===== T-junction =====<br />
<br />
<br />
<br />
<br />
<br />
===== Open space =====<br />
<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First individual virtual walls were constructed therefore blocking potential corridors, which lead PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius; therefore, PICO will move more smooth through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
<br />
In Figure 1 two minima are shown that represent the far corners between the three maxima. These provide pico with reference points from where the virtual walls are constructed. Dependent on the direction of the desired turn the corner is used as a reference point for computing the radius. <br />
<br />
===== T-junction =====<br />
<br />
In the case of a T-junction the situation is slightly different since with the above method the minima between the two maxima will not represent an corner. However locating this minima is again usefull, dependent on the kind of turn 100+n or 100-n a radius is computed which will represent the walls</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19038Embedded Motion Control 2015 Group 3/Scan2015-06-07T17:18:50Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is completed. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. In between there are two minima that represent the corners between them. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
The picture above gives insight in how to detect corners between maxima. These provide pico with good reference points which will later be used for constructing virtual walls, but for now are very usefull for distinction of different kind of junctions. <br />
<br />
===== T-junction =====<br />
<br />
<br />
<br />
===== Open space =====<br />
<br />
=== Constructing virtual walls ===</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19037Embedded Motion Control 2015 Group 3/Scan2015-06-07T17:16:45Z<p>S111845: /* Crossroad */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is completed. Next, the different type of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necassary for driving through the maze, it is also a important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis alligned there are three possibilites:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider an crossroad shown in the picture below, the left plot shows what pico sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go to. In between there are two minima that represent the corners between them. By slightly modifying the data the actual vision as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]]<br />
<br />
The picture above gives insight in how to detect corners between maxima. These provide pico with good reference points<br />
<br />
===== T-junction =====<br />
<br />
<br />
<br />
===== Open space =====<br />
<br />
=== Constructing virtual walls ===</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19036Embedded Motion Control 2015 Group 3/Scan2015-06-07T17:15:18Z<p>S111845: /* Detection intersections */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270 degrees view, with approximately thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
<br />
<br />
=== Avoidance collision ===<br />
<br />
The first level of saftey is provided by the potential field algoritm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as first layer. However, avoidance collision is one of the top priorities since if Pico bumps into the wall the attempt of solving the maze is over. Another safety layer has been implemented to prevent the robot hitting walls or corners. The distance to the wals is continuosly measured and compared to a set safety margin. If the distance of multiple coextensive beams is smaller than this fixed parameter the robot will move in the oppositie direction.<br />
<br />
=== Detection of intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is completed. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
===== Crossroad =====<br />
Consider a crossroad as shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go. In between there are two minima, which represent the corners between them. By slightly modifying the data, the actual view as seen in the simulator can be constructed, shown in (b). <br />
<br />
[[File:detection.png|400px|thumb|center|The LRF data from PICO: (a) the data PICO retrieves, in this case 3 maxima and 2 minima; (b) the slightly modified data showing the actual corridors]]<br />
<br />
<br />
===== T-junction =====<br />
<br />
<br />
<br />
===== Open space =====<br />
<br />
=== Constructing virtual walls ===</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19035Embedded Motion Control 2015 Group 32015-06-07T17:08:39Z<p>S111845: /* Constructing virtual walls */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS), and to make this operational by practically implementing an embedded control system for an autonomous robot in the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge the students have to let the robot drive through a corridor and then take the first exit.<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* To do so, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
The robot will have a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level proceedings the robot should be able to do. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<pre style="color:red">(don't know for sure because in an old version was this:)<br />
* Drive<br />
* Turn<br />
* Scan<br />
* Wait<br />
</pre><br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
<pre style="color: red"><br />
* Drive to location<br />
* Drive to a position in the maze based on the information in the map. This includes (multiple) drive and turn actions. Driving will also always include scanning to ensure there are no collisions. <br />
* Check for doors<br />
* Wait at the potential door location for a predetermined time period while scanning the distance to the potential door to check if it opens.<br />
* Locate obstacles<br />
* Measure the preset minimum safe distance from the walls or measure not moving as expected according to the driving action.<br />
* Map the environment<br />
* Store the relative positions of all discovered objects and doors and incorporate them into a map.<br />
These skills are then used in the higher order behaviors of the software. These are specified in the specifications section.<br />
</pre><br />
<br />
[[File:blockdia.PNG|400px|thumb|center|Blockdiagram for connection between the contexts (update?)]]<br />
<br />
=== (Specifications) ===<br />
The first specification results from the second requirement: Driving without bumping into objects. In order to do this, the robot uses its sensors to scan the surroundings. It then adjusts its speed and direction to maintain a safe distance from the walls.<br />
<br />
The way the robot solves the maze comprises a few principal things the robot will be able to do. Because of the addition of doors in the maze, the strategy of wall hugging is no longer effective, hence a different approach is required. The second specification is that the robot remembers what it has seen of the maze and builds a map accordingly. The robot should then use this map to decide which avenue to explore. <br />
The escape strategy of the robot is based on a maze-solving algorithm. Because the doors in the maze might not be clearly distinguishable, they might be difficult to detect. The only way to know for sure whether a door is present at a certain location is to stand still in front of it until it opens. Standing still in front of every piece of wall in order to check for doors takes a long time and is therefore not desirable for escaping the maze as fast as possible. Therefore the robot first assumes that there are no doors on the way to the exit. It then explores by following the wall and taking every left turn. Whenever a dead end is hit, the robot goes back to the last intersection and chooses the next left path. Because the robot maps the maze, it knows whether it has explored an area and when it moves in a loop.<br />
<br />
=== Structure ===<br />
<br />
The problem is divided into different blocks. We have chosen to make these four blocks: Drive, Scan, Decision and Mapping. The following is an overall structure of the software:<br />
<br />
[[File:drivescanmapdec.png|400px|thumb|center|Cohesion of Drive-, Scan-, Decision- and Mapping block]]<br />
<br />
=== Calibration ===<br />
Calibration part (if we want)<br />
<br />
...<br />
<br />
...<br />
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
Brief instruction about one block can be found here. In addition, there are also more detailed problem-solving processes and ideas which can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, this block is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field; how the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. The potential field is an easy way to drive through corridors and to make turns.<br />
<br />
Two other methods were also considered: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. However, the potential field proved to be the most robust and simplest method.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Link to drive page]<br />
<br />
[[File:Virtualwalls.png|400px|thumb|center|Schematic figure of virtual walls]]<br />
<br />
<br />
The composition pattern of the drive block:<br />
[[File:cpdrive.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
The Scan block processes the laser data of the laser range finder. This data is used to detect corridors, doors and different kinds of junctions. The data that is retrieved by 'scan' is used by all three other blocks.<br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
===== Potential field =====<br />
Splitting up the received laser data from the LRF into x- and y-components and summing these up results in a large vector whose angle is the appropriate direction for PICO to follow. In other words, PICO always moves towards the place with the most space. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for setting the velocity. <br />
<br />
In straight corridors PICO will drive in the middle in a robust manner with the help of the potential field. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker. <br />
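As an illustration, a minimal Python sketch of this resultant-vector computation (assuming the beams are spread evenly over the 270-degree field of view, i.e. roughly -2.356 to +2.356 rad; the names are hypothetical and the real implementation differs):<br />

```python
import math

def potential_field_direction(ranges, angle_min=-2.356, angle_max=2.356):
    """Sum the x- and y-components of all LRF beams and return the
    angle of the resultant vector: the direction with the most room."""
    step = (angle_max - angle_min) / (len(ranges) - 1)
    fx = sum(r * math.cos(angle_min + i * step) for i, r in enumerate(ranges))
    fy = sum(r * math.sin(angle_min + i * step) for i, r in enumerate(ranges))
    # only the angle matters: the Drive block sets the velocity itself
    return math.atan2(fy, fx)
```

In a symmetric corridor the side components cancel and the direction is straight ahead; more free space on one side rotates the resultant towards that side.<br />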
<br />
<br />
===== Constructing virtual walls =====<br />
At junctions and intersections the plain potential field is unable to lead PICO in the desired direction. Therefore, an extra layer is added to the scan data which enables editing of the LRF data that PICO will see. The main advantage of introducing this second layer is that the actual measured data remains available for all processes in the different blocks. By modifying the data, virtual walls are constructed; these steer PICO in the desired direction through the potential field. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
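A minimal sketch of how such a second layer could be realised (hypothetical Python; the measured data is copied so that it stays available unmodified for the other blocks):<br />

```python
def add_virtual_wall(ranges, start, stop, wall_distance=0.4):
    """Return a copy of the LRF data in which the beams with indices
    [start, stop) are clipped to at most `wall_distance`, so that the
    potential field perceives a nearby wall in that sector and steers
    PICO away from it."""
    modified = list(ranges)          # leave the measured data untouched
    for i in range(start, stop):
        modified[i] = min(modified[i], wall_distance)
    return modified
```

Using `min()` means a real wall that is even closer than the virtual one is still seen at its true distance.<br />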
<br />
===== Collision avoidance =====<br />
<br />
To create an extra layer of safety, collision avoidance has been added on top of the potential field. In general the potential field avoids collisions, but when constructing virtual walls fails the robot may crash into a wall, and the attempt at solving the maze is then over. The collision avoidance itself is fairly simple: when the distance of multiple adjacent LRF beams is below a certain value, PICO moves in the opposite direction. Multiple beams are used to make the method more robust. The current parameter for activating collision avoidance is set at 30 centimeters measured from the scanner; note that this value is based on the dimensions of PICO.<br />
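The check described here could be sketched as follows (Python, hypothetical names; the 0.30 m margin and the number of required beams correspond to the parameters mentioned above):<br />

```python
import math

def collision_check(ranges, angles, safety_margin=0.30, min_beams=5):
    """If at least `min_beams` consecutive beams measure less than the
    safety margin, return a unit (vx, vy) pointing away from those
    beams; otherwise return None."""
    run = []                         # angles of consecutive close beams
    for r, a in zip(ranges, angles):
        if r < safety_margin:
            run.append(a)
            if len(run) >= min_beams:
                mean_angle = sum(run) / len(run)
                # move opposite to the direction of the obstacle
                return (-math.cos(mean_angle), -math.sin(mean_angle))
        else:
            run = []                 # a single far beam resets the run
    return None
```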
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
The maze-solving algorithm that is used is Trémaux's algorithm. This algorithm requires drawing lines on the floor. Every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks. If two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached. (ref)<br />
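The marking rule can be captured in a few lines. A sketch (Python; `edge_marks` is a hypothetical mapping from each candidate direction at the current junction to how often its edge has been marked):<br />

```python
import random

def tremaux_choice(edge_marks, rng=random):
    """Pick the direction whose edge carries the fewest marks; break
    ties randomly.  Edges marked twice are never taken again."""
    fewest = min(edge_marks.values())
    if fewest >= 2:
        return None   # every edge exhausted: no direction is allowed
    candidates = [d for d, marks in edge_marks.items() if marks == fewest]
    return rng.choice(candidates)
```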
<br />
<pre style="color: red"><br />
Different situations when visiting a node<br />
* If it is a dead-end node<br />
* Did the door open for me<br />
* Any unvisited paths<br />
* Any paths with 1 visit<br />
* Paths with 2 visit (not a choice)<br />
</pre><br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block will store the corridors and junctions of the maze, so that the decision block can consider the various possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As is said in the previous paragraph, the Tremaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is the direction that will lead to solving the maze.<br />
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is on a new node or on an old node.<br />
** The robot figures out which node it came from. It can then determine which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge, and so on. This is not necessary for the algorithm, but it may help formulate more advanced weighting functions for optimizations.<br />
** The robot will also have to realize if the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''visited 2 times''. If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead. Otherwise left, otherwise right.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
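The bookkeeping described above can be sketched with a small node/edge store plus a helper that translates a chosen global edge direction into a turn command in PICO's local frame (Python; all names are hypothetical):<br />

```python
import math

class MazeMap:
    """Minimal map: nodes in global coordinates, edges with visit
    counts for the Tremaux bookkeeping."""

    def __init__(self):
        self.nodes = {}    # node id -> (x, y) in global coordinates
        self.visits = {}   # frozenset({a, b}) -> times traversed

    def add_node(self, node_id, x, y):
        self.nodes[node_id] = (x, y)

    def traverse(self, a, b):
        """Mark the edge between nodes a and b as visited once more."""
        key = frozenset((a, b))
        self.visits[key] = self.visits.get(key, 0) + 1
        return self.visits[key]

def global_to_local(edge_angle, robot_heading):
    """Translate a global edge direction into a turn command in the
    robot frame, wrapped to (-pi, pi]."""
    turn = edge_angle - robot_heading
    while turn <= -math.pi:
        turn += 2 * math.pi
    while turn > math.pi:
        turn -= 2 * math.pi
    return turn
```

The `global_to_local` helper is the "translation from chosen edge to turn command" step: the same global edge direction yields a different turn depending on PICO's current heading.<br />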
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Link to mapping page]<br />
<br />
=== Localisation ===<br />
Not far enough yet<br />
<br />
...<br />
<br />
...<br />
<br />
=== Integration ===<br />
....<br />
....<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19034Embedded Motion Control 2015 Group 32015-06-07T17:08:25Z<p>S111845: /* Scan block */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS), and to make this operational by practically implementing an embedded control system for an autonomous robot in the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge the students have to let the robot drive through a corridor and then take the first exit.<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* To do so, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
The robot will have a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level proceedings the robot should be able to do. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<pre style="color:red">(don't know for sure because in an old version was this:)<br />
* Drive<br />
* Turn<br />
* Scan<br />
* Wait<br />
</pre><br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
<pre style="color: red"><br />
* Drive to location<br />
* Drive to a position in the maze based on the information in the map. This includes (multiple) drive and turn actions. Driving will also always include scanning to ensure there are no collisions. <br />
* Check for doors<br />
* Wait at the potential door location for a predetermined time period while scanning the distance to the potential door to check if it opens.<br />
* Locate obstacles<br />
* Measure the preset minimum safe distance from the walls or measure not moving as expected according to the driving action.<br />
* Map the environment<br />
* Store the relative positions of all discovered objects and doors and incorporate them into a map.<br />
These skills are then used in the higher order behaviors of the software. These are specified in the specifications section.<br />
</pre><br />
<br />
[[File:blockdia.PNG|400px|thumb|center|Blockdiagram for connection between the contexts (update?)]]<br />
<br />
=== (Specifications) ===<br />
The first specification results from the second requirement: Driving without bumping into objects. In order to do this, the robot uses its sensors to scan the surroundings. It then adjusts its speed and direction to maintain a safe distance from the walls.<br />
<br />
The way the robot solves the maze comprises a few principal things the robot will be able to do. Because of the addition of doors in the maze, the strategy of wall hugging is no longer effective, hence a different approach is required. The second specification is that the robot remembers what it has seen of the maze and builds a map accordingly. The robot should then use this map to decide which avenue to explore. <br />
The escape strategy of the robot is based on a maze-solving algorithm. Because the doors in the maze might not be clearly distinguishable, they might be difficult to detect. The only way to know for sure whether a door is present at a certain location is to stand still in front of it until it opens. Standing still in front of every piece of wall in order to check for doors takes a long time and is therefore not desirable for escaping the maze as fast as possible. Therefore the robot first assumes that there are no doors on the way to the exit. It then explores by following the wall and taking every left turn. Whenever a dead end is hit, the robot goes back to the last intersection and chooses the next left path. Because the robot maps the maze, it knows whether it has explored an area and when it moves in a loop.<br />
<br />
=== Structure ===<br />
<br />
The problem is divided into different blocks. We have chosen to make these four blocks: Drive, Scan, Decision and Mapping. The following is an overall structure of the software:<br />
<br />
[[File:drivescanmapdec.png|400px|thumb|center|Cohesion of Drive-, Scan-, Decision- and Mapping block]]<br />
<br />
=== Calibration ===<br />
Calibration part (if we want)<br />
<br />
...<br />
<br />
...<br />
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
Brief instruction about one block can be found here. In addition, there are also more detailed problem-solving processes and ideas which can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, this block is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field; how the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. The potential field is an easy way to drive through corridors and to make turns.<br />
<br />
Two other methods were also considered: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. However, the potential field proved to be the most robust and simplest method.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Link to drive page]<br />
<br />
[[File:Virtualwalls.png|400px|thumb|center|Schematic figure of virtual walls]]<br />
<br />
<br />
The composition pattern of the drive block:<br />
[[File:cpdrive.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
The Scan block processes the laser data of the laser range finder. This data is used to detect corridors, doors and different kinds of junctions. The data that is retrieved by 'scan' is used by all three other blocks.<br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
===== Potential field =====<br />
Splitting up the received laser data from the LRF into x- and y-components and summing these up results in a large vector whose angle is the appropriate direction for PICO to follow. In other words, PICO always moves towards the place with the most space. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for setting the velocity. <br />
<br />
In straight corridors PICO will drive in the middle in a robust manner with the help of the potential field. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker. <br />
<br />
<br />
===== Constructing virtual walls =====<br />
At junctions and intersections the plain potential field is unable to lead PICO in the desired direction. Therefore, an extra layer is added to the scan data which enables editing of the LRF data that PICO will see. The main advantage of introducing this second layer is that the actual measured data remains available for all processes in the different blocks. By modifying the data, virtual walls are constructed; these steer PICO in the desired direction through the potential field. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.<br />
<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Link to scan page]<br />
<br />
<br />
===== Collision avoidance =====<br />
<br />
To create an extra layer of safety, collision avoidance has been added on top of the potential field. In general the potential field avoids collisions, but when constructing virtual walls fails the robot may crash into a wall, and the attempt at solving the maze is then over. The collision avoidance itself is fairly simple: when the distance of multiple adjacent LRF beams is below a certain value, PICO moves in the opposite direction. Multiple beams are used to make the method more robust. The current parameter for activating collision avoidance is set at 30 centimeters measured from the scanner; note that this value is based on the dimensions of PICO.<br />
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
The maze-solving algorithm that is used is Trémaux's algorithm. This algorithm requires drawing lines on the floor. Every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks. If two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached. (ref)<br />
<br />
<pre style="color: red"><br />
Different situations when visiting a node<br />
* If it is a dead-end node<br />
* Did the door open for me<br />
* Any unvisited paths<br />
* Any paths with 1 visit<br />
* Paths with 2 visit (not a choice)<br />
</pre><br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block will store the corridors and junctions of the maze, so that the decision block can consider the various possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As is said in the previous paragraph, the Tremaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is the direction that will lead to solving the maze.<br />
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. Now it can decide whether it is on a new node or on an old node.<br />
** The robot figures out which node it came from. Now it can determine which edge it has been traversing, and it marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimizations.<br />
** The robot also has to recognize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''Visited 2 times''. If not, take the edge you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
** Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead. Otherwise left, otherwise right.<br />
** Are there any edges visited twice? Do not go there. According to the Trémaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node in the direction of the edge, in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
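The direction-choosing rules above can be sketched as follows. The function name and the `options` mapping are hypothetical names chosen for illustration; directions blocked by walls are simply omitted from `options`.<br />

```python
def choose_edge(options, door_opened):
    """Pick the next direction at a node.
    `options` maps 'straight'/'left'/'right'/'back' to the visit
    count of the corresponding edge. Returns the chosen direction,
    or None when only twice-visited edges remain (no solution)."""
    if door_opened:
        return 'straight'
    # Prefer unvisited edges, then once-visited edges; within each
    # group the order of preference is straight, left, right, back.
    for max_visits in (0, 1):
        for direction in ('straight', 'left', 'right', 'back'):
            if options.get(direction, 99) <= max_visits:
                return direction
    return None

print(choose_edge({'left': 0, 'straight': 1, 'right': 2}, False))  # -> 'left'
```

Note that an unvisited edge found in the first pass is always preferred over any once-visited edge, which matches the rule of never taking a more-marked path while a less-marked one exists.<br />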
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Link to mapping page]<br />
<br />
=== Localisation ===<br />
Not far enough yet<br />
<br />
...<br />
<br />
...<br />
<br />
=== Integration ===<br />
....<br />
....<br />
<br />
= Experiments =<br />
Seven experiments were performed during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we have worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19033Embedded Motion Control 2015 Group 3/Scan2015-06-07T16:55:20Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270-degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm: its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. Therefore, another safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple consecutive beams is smaller than this fixed parameter, the robot will move in the opposite direction.<br />
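This safety check can be sketched as follows. The values are illustrative assumptions; a margin of 30 centimeters based on PICO's dimensions is mentioned elsewhere on this page, and requiring several consecutive beams makes the check robust against single-beam noise.<br />

```python
SAFETY_MARGIN = 0.3   # metres; assumed value, tuned to PICO's size
MIN_BEAMS = 5         # require several consecutive beams to agree

def too_close(ranges):
    """Return True when MIN_BEAMS consecutive laser beams all
    measure a distance below SAFETY_MARGIN."""
    run = 0
    for r in ranges:
        run = run + 1 if r < SAFETY_MARGIN else 0
        if run >= MIN_BEAMS:
            return True
    return False

print(too_close([1.0, 0.2, 0.2, 0.2, 0.2, 0.2, 1.0]))  # -> True
print(too_close([1.0, 0.2, 1.0, 0.2, 1.0, 0.2, 1.0]))  # -> False
```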
<br />
=== Detection of intersections ===<br />
<br />
At this stage, the basic skill of driving with the potential field based on LRF data is completed. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Intersection<br />
# T-junction<br />
# Open space<br />
<br />
As an example, consider an intersection, as shown in the figure below. <br />
<br />
[[File:detection.png|400px|thumb|center|The LRF data from PICO: (a) the raw data PICO retrieves, in this case with 3 maxima and 2 minima; (b) the slightly modified data, revealing the actual corridors]]<br />
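Open corridors appear in the LRF data as runs of beams with a large measured range (the maxima in the figure). A rough sketch of counting such openings is given below; the distance threshold and minimum run width are illustrative assumptions, not the group's actual parameters.<br />

```python
def count_openings(ranges, open_dist=1.5, min_width=10):
    """Count corridor openings in a scan: runs of at least
    `min_width` consecutive beams whose range exceeds `open_dist`."""
    openings, run = 0, 0
    for r in ranges:
        run = run + 1 if r > open_dist else 0
        if run == min_width:   # count each long run exactly once
            openings += 1
    return openings

# Three far-away regions could e.g. indicate an intersection
# (left, straight and right corridors are all open):
scan = ([0.5] * 20 + [3.0] * 15) * 3 + [0.5] * 20
print(count_openings(scan))  # -> 3
```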
<br />
=== Constructing virtual walls ===</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19032Embedded Motion Control 2015 Group 3/Scan2015-06-07T16:53:12Z<p>S111845: /* Detection intersections */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270-degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm: its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. Therefore, another safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple consecutive beams is smaller than this fixed parameter, the robot will move in the opposite direction.<br />
<br />
<br />
=== Detection of intersections ===<br />
<br />
At this stage, the basic skill of driving with the potential field based on LRF data is completed. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Intersection<br />
# T-junction<br />
# Open space<br />
<br />
As an example, consider an intersection, as shown in the figure below. <br />
<br />
[[File:detection.png|400px|thumb|center|The LRF data from PICO: (a) the raw data PICO retrieves, in this case with 3 maxima and 2 minima; (b) the slightly modified data, revealing the actual corridors]]<br />
<br />
=== Constructing virtual walls ===</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19031Embedded Motion Control 2015 Group 3/Scan2015-06-07T16:51:48Z<p>S111845: /* Detection intersections */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270-degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm: its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. Therefore, another safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple consecutive beams is smaller than this fixed parameter, the robot will move in the opposite direction.<br />
<br />
<br />
=== Detection of intersections ===<br />
<br />
At this stage, the basic skill of driving with the potential field based on LRF data is completed. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Intersection<br />
# T-junction<br />
# Open space<br />
<br />
As an example, consider an intersection, as shown in the figure below. <br />
<br />
[[File:detection.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Constructing virtual walls ===</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19030Embedded Motion Control 2015 Group 3/Scan2015-06-07T16:43:56Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270-degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm: its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. Therefore, another safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple consecutive beams is smaller than this fixed parameter, the robot will move in the opposite direction.<br />
<br />
<br />
=== Detection of intersections ===<br />
<br />
At this stage, the basic skill of driving with the potential field based on LRF data is completed. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Not only is recognition necessary for driving through the maze, it is also an important part of mapping the maze, see [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping].<br />
<br />
<br />
<br />
[[File:detection.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Constructing virtual walls ===</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19029Embedded Motion Control 2015 Group 3/Scan2015-06-07T16:40:35Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270-degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm: its resultant vector always points towards the direction with the most room, which is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. Therefore, another safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple consecutive beams is smaller than this fixed parameter, the robot will move in the opposite direction.<br />
<br />
<br />
=== Constructing virtual walls ===<br />
<br />
<br />
=== Detection of intersections ===<br />
<br />
[[File:detection.png|400px|thumb|center|CP of Drive]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=File:Detection.png&diff=19028File:Detection.png2015-06-07T16:24:51Z<p>S111845: </p>
<hr />
<div></div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19027Embedded Motion Control 2015 Group 3/Scan2015-06-07T16:23:57Z<p>S111845: /* Scan */</p>
<hr />
<div>= Scan = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
In order to solve the maze successfully, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270-degree view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
=== Collision avoidance ===<br />
<br />
=== Constructing virtual walls ===<br />
<br />
<br />
=== Detection of intersections ===<br />
<br />
[[File:detection.png|400px|thumb|center|CP of Drive]]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19026Embedded Motion Control 2015 Group 32015-06-07T16:22:58Z<p>S111845: /* Potential field */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight into the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). This is made operational by practically implementing an embedded control system for an autonomous robot in the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, the students have to let the robot drive through a corridor and then take the first exit.<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move through the maze and reach the exit.<br />
* The robot should avoid bumping into the walls; hence, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level operations the robot should be able to perform. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<pre style="color:red">(don't know for sure because in an old version was this:)<br />
* Drive<br />
* Turn<br />
* Scan<br />
* Wait<br />
</pre><br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
<pre style="color: red"><br />
* Drive to location<br />
** Drive to a position in the maze based on the information in the map. This includes (multiple) drive and turn actions. Driving will also always include scanning to ensure there are no collisions. <br />
* Check for doors<br />
** Wait at the potential door location for a predetermined time period while scanning the distance to the potential door to check if it opens.<br />
* Locate obstacles<br />
** Measure the preset minimum safe distance from the walls, or detect that the robot is not moving as expected according to the driving action.<br />
* Map the environment<br />
** Store the relative positions of all discovered objects and doors and incorporate them into a map.<br />
These skills are then used in the higher-order behaviors of the software. These are specified in the specifications section.<br />
</pre><br />
<br />
[[File:blockdia.PNG|400px|thumb|center|Block diagram of the connections between the contexts (update?)]]<br />
<br />
=== (Specifications) ===<br />
The first specification results from the second requirement: driving without bumping into objects. In order to do this, the robot uses its sensors to scan the surroundings. It then adjusts its speed and direction to maintain a safe distance from the walls.<br />
<br />
The way the robot will solve the maze comprises a few principal things the robot will be able to do. Because of the addition of doors in the maze, the strategy of wall hugging is no longer effective, hence a different approach is required. The second specification is that the robot will remember what it has seen of the maze and build a map accordingly. The robot should then use this map to decide which avenue it should explore. <br />
The escape strategy of the robot is as follows. Because the doors in the maze might not be clearly distinguishable, they might be difficult to detect. The only way to know for sure whether a door is present at a certain location is to stand still in front of it until it opens. Standing still in front of every piece of wall to check for doors takes a long time and is therefore not desirable for escaping the maze as fast as possible. Therefore, the robot first assumes that there are no doors on the way to the exit. It then explores by following the wall and taking every left turn. Whenever a dead end is hit, the robot goes back to the last intersection and chooses the next left path. Because the robot maps the maze, it knows whether it has explored an area and when it moves in a loop.<br />
<br />
=== Structure ===<br />
<br />
The problem is divided into different blocks. We have chosen to make these four blocks: Drive, Scan, Decision and Mapping. The following is an overall structure of the software:<br />
<br />
[[File:drivescanmapdec.png|400px|thumb|center|Cohesion of Drive-, Scan-, Decision- and Mapping block]]<br />
<br />
=== Calibration ===<br />
Calibration part (if we want)<br />
<br />
...<br />
<br />
...<br />
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
A brief description of each block can be found here. In addition, more detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, this block is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field; how the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. The potential field is an easy way to drive through corridors and make turns.<br />
<br />
Two other methods were also considered: the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. However, the potential field was the most robust and simplest method.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Link to drive page]<br />
<br />
[[File:Virtualwalls.png|400px|thumb|center|Schematic figure of virtual walls]]<br />
<br />
<br />
The composition pattern of the drive block:<br />
[[File:cpdrive.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
The block Scan processes the laser data of the Laser Range Finders. <br />
<br />
With this, we want to detect the walls. This will be used to find corridors, doors and all kinds of junctions. The data that is retrieved by 'scan' is used by all three other blocks.<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
===== Potential field =====<br />
By splitting up the laser data received from the LRF into x and y components and summing them up, a large vector is obtained containing the appropriate angle for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity. <br />
<br />
In straight corridors, the potential field will let PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer is added on top of the laser range finder data that PICO sees. From there on, the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive these virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the right corridor. The 'decision maker', in combination with the 'mapping algorithm', will decide where to place the virtual walls.<br />
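The resultant-vector computation and the effect of a virtual wall can be sketched as follows. The angular range and beam count approximate PICO's 270-degree, roughly thousand-beam scanner, but the exact parameter values and the 0.1 m "virtual wall" range are assumptions for illustration.<br />

```python
import math

def potential_field_angle(ranges, angle_min=-2.356, angle_inc=0.004712):
    """Sum every beam as a vector (range times its unit direction) and
    return the angle of the resultant, i.e. the direction with the most
    room. The magnitude of the resultant is deliberately ignored."""
    x = y = 0.0
    for i, r in enumerate(ranges):
        th = angle_min + i * angle_inc
        x += r * math.cos(th)
        y += r * math.sin(th)
    return math.atan2(y, x)

# A virtual wall is imposed by overwriting beams with a small range,
# so the potential field steers away from the blocked direction:
ranges = [4.0] * 1000
for i in range(300):          # block the rightmost part of the view
    ranges[i] = 0.1
print(potential_field_angle(ranges) > 0)  # -> True (steers left)
```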
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Link to scan page]<br />
<br />
===== Constructing virtual walls =====<br />
<br />
<br />
===== Collision avoidance =====<br />
<br />
To create an extra layer of safety, collision avoidance has been added on top of the potential field. In general the potential field avoids collisions; however, when constructing virtual walls fails, the robot may crash into a wall and the attempt at solving the maze is over. This collision avoidance is fairly simple: when the distance of multiple consecutive LRF beams is below a certain value, PICO will move in the opposite direction. Multiple beams are used to make this method more robust. The current parameter for activating collision avoidance is set at 30 centimeters, measured from its scanner; note that this value is based on the dimensions of PICO.<br />
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan', then decides what to do (possibly consulting 'Mapping'), and finally sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks. If two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached. (ref)<br />
<br />
<pre style="color: red"><br />
Different situations when visiting a node:<br />
* If it is a dead-end node<br />
* Did the door open for me?<br />
* Any unvisited paths<br />
* Any paths with 1 visit<br />
* Paths with 2 visits (not a choice)<br />
</pre><br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block will store the corridors and junctions of the maze, so that the decision block can consider certain possibilities and ensure that the maze will be solved in a strategic way.<br />
<br />
As mentioned in the previous paragraph, the Trémaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is a choice of direction that will lead to solving the maze.<br />
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. Now it can decide whether it is on a new node or on an old node.<br />
** The robot figures out which node it came from. Now it can determine which edge it has been traversing, and it marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimizations.<br />
** The robot also has to recognize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''Visited 2 times''. If not, take the edge you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
** Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead. Otherwise left, otherwise right.<br />
** Are there any edges visited twice? Do not go there. According to the Trémaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node in the direction of the edge, in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Link to mapping page]<br />
<br />
=== Localisation ===<br />
Not far enough yet<br />
<br />
...<br />
<br />
...<br />
<br />
=== Integration ===<br />
....<br />
....<br />
<br />
= Experiments =<br />
Seven experiments were performed during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative design that is not used in the end.<br />
To see what we have worked on during the entiry process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19025Embedded Motion Control 2015 Group 3/Scan2015-06-07T16:19:38Z<p>S111845: /* Scan */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight into the design and implementation of embedded motion systems, and to develop insight into the possibilities and limitations of the embedded environment (actuators, sensors, processors, RTOS). This knowledge is made operational by practically implementing an embedded control system for an autonomous robot in the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week, the "Corridor Competition" takes place. During this challenge, the students have to let the robot drive through a corridor and then take the first exit.<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move through the maze and reach the exit.<br />
* Avoid bumping into the walls, which means the robot must perceive its surroundings.<br />
* Solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level proceedings the robot should be able to perform. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<pre style="color:red">(don't know for sure because in an old version was this:)<br />
* Drive<br />
* Turn<br />
* Scan<br />
* Wait<br />
</pre><br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
<pre style="color: red"><br />
* Drive to location<br />
* Drive to a position in the maze based on the information in the map. This includes (multiple) drive and turn actions. Driving will also always include scanning to ensure there are no collisions. <br />
* Check for doors<br />
* Wait at the potential door location for a predetermined time period while scanning the distance to the potential door to check if it opens.<br />
* Locate obstacles<br />
* Measure the preset minimum safe distance from the walls or measure not moving as expected according to the driving action.<br />
* Map the environment<br />
* Store the relative positions of all discovered objects and doors and incorporate them into a map.<br />
These skills are then used in the higher order behaviors of the software. These are specified in the specifications section.<br />
</pre><br />
<br />
[[File:blockdia.PNG|400px|thumb|center|Blockdiagram for connection between the contexts (update?)]]<br />
<br />
=== (Specifications) ===<br />
The first specification results from the second requirement: Driving without bumping into objects. In order to do this, the robot uses its sensors to scan the surroundings. It then adjusts its speed and direction to maintain a safe distance from the walls.<br />
<br />
The way the robot will solve the maze comprises a few principal things the robot will be able to do. Because of the addition of doors in the maze, the strategy of wall hugging is no longer effective, hence a different approach is required. The second specification is that the robot will remember what it has seen of the maze and build a map accordingly. The robot should then use this map to decide which avenue it should explore. <br />
The escape strategy of the robot is an algorithm. Because the doors in the maze might not be clearly distinguishable, they might be difficult to detect. The only way to know for sure whether a door is present at a certain location is to stand still in front of it until it opens. Standing still in front of every piece of wall in order to check for the presence of doors takes a long time, and is therefore undesirable when escaping the maze as fast as possible. Therefore, the robot first assumes that there are no doors on the way to the exit. It then explores by following the wall and taking every left turn. Whenever a dead end is hit, the robot goes back to the last intersection and chooses the next left path. Because the robot maps the maze, it knows whether it has explored an area and when it moves in a loop.<br />
<br />
=== Structure ===<br />
<br />
The problem is divided into different blocks. We have chosen to make these four blocks: Drive, Scan, Decision and Mapping. The following is an overall structure of the software:<br />
<br />
[[File:drivescanmapdec.png|400px|thumb|center|Cohesion of Drive-, Scan-, Decision- and Mapping block]]<br />
<br />
=== Calibration ===<br />
Calibration part (if we want)<br />
<br />
...<br />
<br />
...<br />
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
A brief description of each block can be found here. In addition, more detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, this block is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. The potential field is an easy way to drive through corridors and to make turns.<br />
<br />
Two other methods were also considered: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. However, the potential field turned out to be the most robust and simplest method.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Link to drive page]<br />
<br />
[[File:Virtualwalls.png|400px|thumb|center|Schematic figure of virtual walls]]<br />
<br />
<br />
The composition pattern of the drive block:<br />
[[File:cpdrive.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
The block Scan processes the laser data of the Laser Range Finders. <br />
<br />
With this data we want to detect the walls, which are then used to find corridors, doors and all kinds of junctions. The data retrieved by 'scan' is used by all three other blocks.<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
===== Potential field =====<br />
Splitting the laser data received from the LRF into x- and y-components and summing these components results in a vector whose angle is the appropriate direction for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for the velocity. <br />
<br />
In straight corridors the potential field lets PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
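The vector sum above can be sketched as follows (an illustrative Python sketch, not the actual PICO code; the beam count in the example and the exact beam-angle convention are assumptions, only the 270-degree field of view comes from the description above):<br />

```python
import math

def potential_field_heading(ranges, fov=math.radians(270)):
    """Sum all LRF beams as vectors; the angle of the resultant is the
    direction with the most free space (0 rad = straight ahead)."""
    n = len(ranges)
    sum_x = sum_y = 0.0
    for i, r in enumerate(ranges):
        angle = -fov / 2 + i * fov / (n - 1)  # beam angle in robot frame
        sum_x += r * math.cos(angle)
        sum_y += r * math.sin(angle)
    # The magnitude of the resultant is ignored; Drive sets the velocity.
    return math.atan2(sum_y, sum_x)
```

In a symmetric corridor the left and right beams cancel and the heading is 0 (straight ahead); more room on one side turns the heading towards that side.<br />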
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls: in principle, an extra layer is added to the laser range finder data that PICO sees. From there on, the potential field does its work and PICO drives in the desired direction.<br />
<br />
The potential field function perceives these virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker', in combination with the 'mapping algorithm', decides where to place the virtual walls.<br />
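A minimal sketch of how such a virtual wall could be imposed on the laser data (illustrative Python; the function name, the sector bounds in the example and the 0.2 m wall distance are assumptions, not values from the actual implementation):<br />

```python
import math

def add_virtual_wall(ranges, fov, block_from, block_to, wall_dist=0.2):
    """Overwrite the beams between the angles block_from..block_to
    (radians, robot frame) with a short range, so that the potential
    field treats that direction as a nearby wall and steers away."""
    n = len(ranges)
    out = list(ranges)
    for i in range(n):
        angle = -fov / 2 + i * fov / (n - 1)
        if block_from <= angle <= block_to:
            out[i] = min(out[i], wall_dist)
    return out
```

For example, blocking the left branch at a T-junction would amount to something like `add_virtual_wall(scan, math.radians(270), math.radians(30), math.radians(140))`; the unmodified potential field then automatically avoids that branch.<br />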
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Link to scan page]<br />
<br />
<br />
===== Constructing virtual walls =====<br />
<br />
<br />
===== Collision avoidance =====<br />
<br />
To create an extra layer of safety, collision avoidance has been added on top of the potential field. In general the potential field avoids collisions, but when constructing virtual walls fails, the robot may crash into a wall and the attempt to solve the maze is over. The collision avoidance itself is fairly simple: when the distance measured by multiple adjacent LRF beams is below a certain value, PICO moves in the opposite direction. Multiple beams are used to make this method more robust. The current parameter for activating collision avoidance is set at 30 centimeters, measured from the scanner; note that this value is based on the dimensions of PICO.<br />
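A minimal sketch of this check (illustrative Python; the 30 cm threshold comes from the text above, while the required number of consecutive beams is an assumption):<br />

```python
def too_close(ranges, threshold=0.30, min_beams=5):
    """Return True if at least `min_beams` consecutive LRF beams
    measure below `threshold` (metres), i.e. an obstacle is so close
    that PICO should move away in the opposite direction."""
    run = 0
    for r in ranges:
        run = run + 1 if r < threshold else 0
        if run >= min_beams:
            return True
    return False
```

Requiring several consecutive beams below the threshold, rather than a single one, filters out isolated noisy measurements.<br />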
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
The maze solving algorithm that is used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks; if two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached. (ref)<br />
<br />
<pre style="color: red"><br />
Different situations when visiting a node<br />
* If it is a dead-end node<br />
* Did the door open for me<br />
* Any unvisited paths<br />
* Any paths with 1 visit<br />
* Paths with 2 visit (not a choice)<br />
</pre><br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block stores the corridors and junctions of the maze, so that the decision block can weigh the possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As mentioned in the previous paragraph, the Tremaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the set of possible routes the robot can take (left, straight ahead, right, turn around) and the output is the choice of direction that will lead to solving the maze.<br />
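One possible way to store such a graph of nodes and edges (an illustrative Python sketch; all names are hypothetical and not taken from the actual code):<br />

```python
class Edge:
    """Connection between two nodes; an edge whose endpoints coincide
    is a loop. `visits` plays the role of the Tremaux marks on the floor."""
    def __init__(self, node_a, node_b, direction):
        self.nodes = (node_a, node_b)
        self.direction = direction  # vector in global coordinates
        self.visits = 0

class Node:
    """Junction or dead end, stored in global coordinates."""
    def __init__(self, x, y):
        self.pos = (x, y)
        self.edges = []

    def connect(self, other, direction):
        edge = Edge(self, other, direction)
        self.edges.append(edge)
        if other is not self:  # a loop is stored only once
            other.edges.append(edge)
        return edge
```

Extra properties mentioned in the list below (energy consumption, traveling time, shape) could simply be added as further attributes of `Edge`.<br />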
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. Now it can decide whether it is on a new node or an old node.<br />
** The robot figures out which node it came from. Now it can determine which edge it has been traversing, and it marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge, etc. This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.<br />
** The robot also has to recognize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check whether the door opened. In that case, go straight ahead and mark the edge that led up to the door as ''visited 2 times''. If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited; otherwise, follow the unvisited edge on your left; otherwise, the unvisited edge on your right.<br />
** Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, follow the one straight ahead; otherwise left, otherwise right.<br />
** Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not at all), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node in the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated.<br />
* A set-up is made for the next node.<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
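The direction-choice rules above can be sketched as a priority over visit counts (illustrative Python; the direction names and the straight-left-right-back preference order follow the rules above, the rest is an assumption):<br />

```python
# Preference order from the rules above: straight first, then left,
# then right, then back the way we came.
PREFERENCE = ["straight", "left", "right", "back"]

def choose_direction(visits):
    """visits: dict mapping each available local direction to how often
    its edge has been traversed (the Tremaux marks). Returns the direction
    to take, or None if every edge has already been visited twice."""
    for max_visits in (0, 1):  # prefer unvisited edges, then visited-once
        for d in PREFERENCE:
            if d in visits and visits[d] == max_visits:
                return d
    return None  # all edges visited twice: back at the start, no solution
```

Returning `None` corresponds to the Tremaux termination case: the robot is back at the starting point and the maze has no solution.<br />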
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Link to mapping page]<br />
<br />
=== Localisation ===<br />
Not far enough yet<br />
<br />
...<br />
<br />
...<br />
<br />
=== Integration ===<br />
....<br />
....<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration]</div>S111845https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19023Embedded Motion Control 2015 Group 32015-06-07T15:49:09Z<p>S111845: /* Scan block */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight in the possibilities and limitations in relation with the embedded environment (actuators, sensors, processors, RTOS). To make this operational and to practically implement an embedded control system for an autonomous robot in the Maze Challenge, in which the robot has to find its way out in a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, called the corridor competition the students have to let the robot drive through a corridor and then take the first exit.<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* So, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The task are the most high level proceedings the robot should be able to do. These are:<br />
<br />
* Determine situation<br />
* Decision making <br />
* Skill selection<br />
<br />
<pre style="color:red">(don't know for sure because in an old version was this:)<br />
* Drive<br />
* Turn<br />
* Scan<br />
* Wait<br />
</pre><br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
<br />
* Drive<br />
* Rotate<br />
* Scan environment<br />
* Handle intersections<br />
* Handle dead ends<br />
* Discover doors<br />
* Mapping environment<br />
* Make decisions based on the map<br />
* Detect the end of the maze<br />
<br />
<pre style="color: red"><br />
* Drive to location<br />
* Drive to a position in the maze based on the information in the map. This includes (multiple) drive and turn actions. Driving will also always include scanning to ensure there are no collisions. <br />
* Check for doors<br />
* Wait at the potential door location for a predetermined time period while scanning the distance to the potential door to check if it opens.<br />
* Locate obstacles<br />
* Measure the preset minimum safe distance from the walls or measure not moving as expected according to the driving action.<br />
* Map the environment<br />
* Store the relative positions of all discovered object and doors and incorporate them into a map.<br />
These skills are then used in the higher order behaviors of the software. These are specified in the specifications section.<br />
</pre><br />
<br />
[[File:blockdia.PNG|400px|thumb|center|Blockdiagram for connection between the contexts (update?)]]<br />
<br />
=== (Specifications) ===<br />
The first specification results from the second requirement: Driving without bumping into objects. In order to do this, the robot uses its sensors to scan the surroundings. It then adjusts its speed and direction to maintain a safe distance from the walls.<br />
<br />
The way the robot will solve the maze comprises of a few principle things the robot will be able to do. Because of the addition of doors in the maze, the strategy of wall hugging is no longer effective. Hence a different approach is required. The second specification is that the robot will remember what it has seen of the maze and that it makes a map accordingly. The robot should then use this map to decide which avenue it should explore. <br />
The escape strategy of the robot is an algorithm. Because the doors in the maze might not be clearly distinguishable, they might be difficult to detect. The only way to know for sure if a door is present at a certain location, is to stand still in front of the door until it opens. Standing still in front of every piece of wall in order to check for the presence of doors takes a long time and is therefore not desirable for escaping the maze as fast as possible. Therefore the robot first assumes that there are no doors in the way to the exit. It then explores by following the wall and taking every left turn. Whenever a dead end is hit, the robot goes back to the last intersection and chooses the next left path. Because the robot maps the maze, it knows whether it has explored an area and when it moves in a loop.<br />
<br />
=== Structure ===<br />
<br />
The problem is divided into different blocks. We have chosen to make these four blocks: Drive, Scan, Decision and Mapping. The following is an overall structure of the software:<br />
<br />
[[File:drivescanmapdec.png|400px|thumb|center|Cohesion of Drive-, Scan-, Decision- and Mapping block]]<br />
<br />
=== Calibration ===<br />
Calibration part (if we want)<br />
<br />
...<br />
<br />
...<br />
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
Brief instruction about one block can be found here. In addition, there are also more detailed problem-solving processes and ideas which can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
Basically, this block is the doer (not the thinker) of the complete system. In our case, the robot moves around using potential field. How the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. Potential field is an easy way to drive through corridors, and making turns.<br />
<br />
Two other methods were also considered: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. However, the potential field was the most robust and easiest method.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Link to drive page]<br />
<br />
[[File:Virtualwalls.png|400px|thumb|center|Schematic figure of virtual walls]]<br />
<br />
<br />
The composition pattern of the drive block:<br />
[[File:cpdrive.png|400px|thumb|center|CP of Drive]]<br />
<br />
=== Scan block ===<br />
The block Scan processes the laser data of the Laser Range Finders. <br />
<br />
<br />
<br />
By this, we want to detect the walls of course. This will be used to find corridors, doors and all kind of junctions. The data that is retrieved by 'scan' is used by all three other blocks.<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
===== Potential field =====<br />
By splitting up the received laser data from the LRF in x and y, and summing them up results in a large vector containing the appropiate angle for PICO to follow. In other words, PICO moves always to the point with the most room. Note, the actual magnitude of this resultant vector is of no importance, since the Drive block has is own conditions for velocity. <br />
<br />
In straight corridors potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection a decision must be made by the decision maker<br />
<br />
<br />
<br />
Since, there are more than one options at intersections. There has to be an extra element to send the robot in the appropriate direction. This is done, by blocking the other directions with virtual walls. In principle an extra layer has been added with the modified laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive this virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the right corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Link to scan page]<br />
<br />
<br />
===== Constructing virtual walls =====<br />
<br />
<br />
===== Avoidance collision =====<br />
<br />
To create an extra layer of saftey avoidance collision has been added on top of the potential field. In general the potential field avoids collisions, however when constructing virtual walls fails the robot may crash into a wall and the turn of solving the maze is over. This avoidance collision is fairly easy, when the distance of multiple oextensive LRF beams is below a certain value PICO will move in the opposite direction. The usage of multiple beams is used to make this method more robust. The current parameter for activating avoidance collision is set at 30 centimeters measured from the its scanner, note that this valued is based on the dimensions of PICO.<br />
<br />
=== Decision block ===<br />
The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
Input: <br />
* Mapping model<br />
* Scan data<br />
<br />
Output: <br />
* Specific drive action command<br />
<br />
The used maze solving algorithm is called: Trémaux's algorithm. This algorithm requires drawing lines on the floor. Every time a direction is chosen it is marked bij drawing a line on the floor (from junction to junction). Choose everytime the direction with the fewest marks. If two direction are visited as often, then choose random between these two. Finally, the exit of the maze will be reached. (ref)<br />
<br />
<pre style="color: red"><br />
Different situations when visiting a node<br />
* If it is a dead-end node<br />
* Did the door open for me<br />
* Any unvisited paths<br />
* Any paths with 1 visit<br />
* Paths with 2 visit (not a choice)<br />
</pre><br />
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Link to decision page]<br />
<br />
=== Mapping block ===<br />
This block will store the corridors and junctions of the maze, so that the decision block can weigh the possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As is said in the previous paragraph, the Tremaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end or any place in the maze where the robot can choose between more than one direction. An edge is the connection between one node and another; an edge may also lead back to the same node, in which case it is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the set of possible routes the robot can take (left, straight ahead, right, turn around) and the output is the chosen direction that will lead to solving the maze.<br />
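A possible data layout for this graph is sketched below. The field names are assumptions for illustration, not the group's actual structures; they only capture what the text states: nodes live in global coordinates, edges connect two nodes (a loop when both endpoints coincide), and each edge carries a Trémaux visit count plus optional extra properties.<br />

```cpp
#include <vector>

// Sketch of the maze graph (assumed layout, not the group's code).
struct Node {
    double x, y;             // global position of the junction or dead end
    std::vector<int> edges;  // indices of connected edges
    bool deadEnd;            // dead ends may trigger a door-open request
};

struct Edge {
    int nodeA, nodeB;  // endpoint node indices; nodeA == nodeB is a loop
    int visits;        // Tremaux mark: 0, 1 or 2
    // Other properties (energy, travel time, shape, ...) could be added
    // here to support more advanced weighting functions.
};
```

Keeping the visit count on the edge rather than the node matches Trémaux's algorithm, which marks corridors, not junctions.<br />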
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot determines where it is located in global coordinates, so it can decide whether it is at a new node or a previously visited one.<br />
** The robot figures out from which node it came. Now it can determine which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot will also have to recognize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check whether the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''Visited 2 times''. If not, take the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
** Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead. Otherwise left, otherwise right.<br />
** Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
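The translation step in the schedule, from an edge direction stored in global coordinates to a local turn command, can be sketched as an angle conversion. The function name and interface are assumptions; the text only specifies that the robot needs the command in local coordinates.<br />

```cpp
#include <cmath>

// Sketch (assumed interface): convert the chosen edge's global direction
// into a heading relative to the robot, given the robot's current global
// heading. Angles in radians; the result is wrapped to (-pi, pi].
double globalToLocalHeading(double edgeAngle, double robotHeading) {
    double d = edgeAngle - robotHeading;
    while (d <= -M_PI) d += 2.0 * M_PI;
    while (d >   M_PI) d -= 2.0 * M_PI;
    // ~0: straight ahead, ~+pi/2: left, ~-pi/2: right, ~pi: turn around
    return d;
}
```

The wrapped angle maps directly onto the four route options (left, straight ahead, right, turn around) that the decision maker works with.<br />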
<br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Link to mapping page]<br />
<br />
=== Localisation ===<br />
Not far enough yet<br />
<br />
...<br />
<br />
...<br />
<br />
=== Integration ===<br />
....<br />
....<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, as well as a short evaluation of each.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative designs that were not used in the end.<br />
To see what we worked on during the entire process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration]</div>S111845