Mobile Robot Control 2021 Group 4

= Group Members =


{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; color: black;"
|-
! scope="col" | '''Name'''
! scope="col" | '''Student Number'''
|-
| T.J.M. Snijders || 1017557
|-
| B.P.J. Reijnen || 0988918
|-
| J.H.B. de Zwart || 1020347
|-
| S.C.M. Mennen || 1004332
|-
| A.C.C.E. Vissers || 0914776
|-
| B. Godschalk || 1265172
|}




= Introduction =
This wiki page reports the progress made by Group 1 towards completing the Escape Room Challenge and the Hospital Challenge. The goal of the Escape Room Challenge is to escape a rectangular room as fast as possible without bumping into walls. The goal of the Hospital Challenge is to deliver medicines from one cabinet to another as fast as possible, without bumping into static or dynamic objects.
This year, due to COVID-19, all meetings and tests were done from home. The meeting schedule is shown at the end of the wiki. Since there was no physical access to the autonomous robot (PICO), its software, written in C++, was applied to a simulation environment. The simulation environment should represent the real test as closely as possible, so the information on this page should also apply to a real test.






= Design document =
In order to get a good overview of the assignment, a design document was composed, which can be found [[:File:4SC020___Design_Document-v1.pdf|here]]. This document describes the requirements, the software architecture (its functions, components and interfaces) and finally the system specifications. It provides a guideline for successfully completing the Escape Room Challenge.


= Escape Room Challenge =
The Escape Room Challenge consists of a rectangular room with one way out. PICO is placed at a random position on a map that is unknown to the participants before the challenge. PICO should detect the way out by itself using the Laser Range Finder (LRF) data and drive out of the room without hitting a wall, using its three omni wheels.


To create a solid strategy, a number of options were outlined. Two solid versions were considered: wall following and a gap scan version. After confirming that the gap scan version can, if correctly implemented, be a lot faster than following the wall, this option was chosen. Wall following does not require much understanding of PICO's perception: by constantly looking at the wall and turning when a corner is reached, PICO moves forward as soon as no wall is visible in front of it, indicating that the corridor is found. Another reason to prefer gap detection over wall following is that the gap detection and the moving to certain targets can be reused in the Hospital Challenge. In the Hospital Challenge multiple rooms are present, objects are placed in the rooms and cabinets have to be reached, which makes wall following inapplicable.


The code is divided into multiple parts, or functions, which work together to complete the task. These functions are separated code-wise to keep the code clear and understandable. Next, the state machine of PICO is elaborated. Finally, the simulations are discussed together with the result of the Escape Room Challenge.


== Functions ==
=== Finding walls and gaps ===
Using the LRF, data points are obtained. To differentiate between the walls and the exit, a least-squares regression line algorithm was written. This algorithm fits lines through the found data points in order to find the walls in the field of view (FOV) of the LRF. Where two consecutive walls do not connect, a gap occurs, which is saved in the global reference frame. To collect the data, PICO starts with a 360 degree FOV rotation, collecting all gaps along the way. When the scan stops, the collected gaps are checked against several criteria to determine whether a gap is an exit and whether it is the best exit possible, and, if needed, the gap is fixed (straightened based on the adjacent walls). This function is also used in the Hospital Challenge and is elaborated in the section Perception.
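
Below is a minimal sketch of how such a line fit and gap check could look. The point representation, function names and the 0.5 m threshold are illustrative assumptions, not the actual implementation; note also that a plain least-squares fit degenerates for near-vertical walls, which the real code has to handle.

<pre>
#include <cmath>
#include <utility>
#include <vector>

struct Point { double x, y; };

// Least-squares regression line y = a*x + b through a set of LRF points.
bool fitLine(const std::vector<Point>& pts, double& a, double& b) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(pts.size());
    for (const Point& p : pts) {
        sx += p.x; sy += p.y;
        sxx += p.x * p.x; sxy += p.x * p.y;
    }
    const double denom = n * sxx - sx * sx;
    if (std::abs(denom) < 1e-9) return false;  // (near-)vertical wall
    a = (n * sxy - sx * sy) / denom;
    b = (sy - a * sx) / n;
    return true;
}

// A gap candidate: two consecutive scan points that lie further apart
// than a continuous wall could explain (threshold is a tuning value).
std::vector<std::pair<Point, Point>> findGaps(const std::vector<Point>& scan,
                                              double threshold = 0.5) {
    std::vector<std::pair<Point, Point>> gaps;
    for (size_t i = 1; i < scan.size(); ++i) {
        const double dx = scan[i].x - scan[i - 1].x;
        const double dy = scan[i].y - scan[i - 1].y;
        if (std::hypot(dx, dy) > threshold)
            gaps.push_back({scan[i - 1], scan[i]});
    }
    return gaps;
}
</pre>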
 
=== Path planning ===
To let PICO move from the start location to the exit, a path is generated for PICO to follow. This path consists of two points: a pre-target before the exit gap (denoted as target B) and a target in the center of the gap (denoted as target A), see Figure 1. This section describes how these points are computed.
 
[[Image:Pathplan2.PNG | thumb | 400px | center | Figure 1: Path planning]]
 
If a gap has been found by PICO, the corners of this gap are obtained. From these, target A can easily be computed by taking the average in both the x and y direction. Target B is then placed perpendicular to the gap at a distance of 0.4 meters, which is the width of PICO. This is done to prevent PICO from attempting to drive to its target through a wall.
Target B was computed as follows. First, the gap and the line between points A and B were written in vector representation:
[[Image: vectorBgroup12020.png | frameless | 200px | center ]]
Then, a and b were computed. Since the vectors are known to be orthogonal, their inner product should be zero, resulting in
[[Image: innerproductBgroup12020.png | frameless | 200px | center ]]
Depending on the values of the x and y coordinates of the corners of the gap, either a or b was set to 1 in order to compute the other value. With a and b known, the direction of the vector AB is known. The length of this vector is also known; hence equating the 2-norm of the vector to this length results in
[[Image: normBgroup12020.png| frameless | 200px | center ]]
After these computations are completed, two possible locations for point B are obtained: one before and one after the gap. To determine the correct point, the distance from PICO to both points is computed. The point with the smallest distance will always be the point before the gap and thus the correct point B. Note that this only holds when the room from which PICO needs to escape is rectangular or when the corners of the gap are obtuse.
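
The computation above can be condensed into a few lines by constructing the orthogonal direction directly instead of solving the inner-product equation for a and b; the sketch below does exactly that. All names are illustrative, and the 0.4 m offset is PICO's width as stated above.

<pre>
#include <cmath>

struct Vec2 { double x, y; };

// Compute pre-target B, 0.4 m perpendicular to the gap midpoint A, on
// PICO's side of the gap. c1 and c2 are the gap corners and pico is
// PICO's current position, all in the global frame.
Vec2 computePreTarget(Vec2 c1, Vec2 c2, Vec2 pico) {
    const Vec2 A{(c1.x + c2.x) / 2.0, (c1.y + c2.y) / 2.0};
    // A vector orthogonal to the gap: its inner product with the gap
    // vector (gx, gy) is zero by construction.
    const double gx = c2.x - c1.x, gy = c2.y - c1.y;
    const double len = std::hypot(gx, gy);
    const double nx = -0.4 * gy / len, ny = 0.4 * gx / len;
    const Vec2 b1{A.x + nx, A.y + ny};   // candidate before the gap
    const Vec2 b2{A.x - nx, A.y - ny};   // candidate behind the gap
    // The candidate closest to PICO lies inside the room.
    const double d1 = std::hypot(b1.x - pico.x, b1.y - pico.y);
    const double d2 = std::hypot(b2.x - pico.x, b2.y - pico.y);
    return (d1 < d2) ? b1 : b2;
}
</pre>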
 
=== Potential field corridor ===
When PICO drives in the corridor, it monitors the left and right walls to center itself between the walls of the corridor while moving towards the finish line. This is done using proportional control.
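
A sketch of such a proportional centering law is given below, assuming PICO's y-axis points to its left; the gain is a made-up tuning value.

<pre>
// Lateral velocity that centers PICO between the corridor walls.
double lateralVelocity(double distLeft, double distRight) {
    const double Kp = 0.8;  // proportional gain (tuning value, assumed)
    // Closer to the left wall gives a negative vy (move right) and
    // closer to the right wall a positive vy (move left).
    return Kp * (distLeft - distRight);
}
</pre>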
 
== State machine ==
 
Figure 2 depicts the flow chart, which starts at 'Inactive' and ends at 'Move forwards', executed until PICO stops.
 
The first step that will be executed is the 'Initialize' step. This step is used to clear and set all variable values (the odometry data from the wheels as well as the LRF data) to the default state, after which it continues to the '360° scan', where all walls and possible gaps are detected using the LRF data. In this scan, all LRF measurements are mapped into wall objects if the obtained data points connect. When data points do not connect, a gap occurs, which can be a possible exit or a faulty measurement. When the scan cannot find any gaps in the walls, PICO moves 1 meter towards the furthest point measured in the scan and performs a new scan. If, for any reason, PICO tends to drive into a wall, the robot moves backwards and starts a new scan.
 
The next step, 'Gap?', checks whether any of the found gaps can be considered an exit. If no gaps meet the specifications of an exit, the next step is 'Move to better scan position' and PICO redoes the previous steps. If a gap meets the requirements of the exit, PICO moves on to 'Path planning', which calculates the best possible way to the exit. When the path is set, the following step is 'Move to gap'.
 
The final step is 'Center?', which centers the robot in the corridor of the exit, to prevent incorrect alignment with the exit from resulting in a crash with the walls.
 
Parallel to this flow, PICO continuously keeps track of objects with the laser range finder. Whenever an object is detected within a specific range of the robot, a flag is raised. This flag can be used to update the path planning, which prevents crashes when the robot is heading in the wrong direction.
 
[[Image:Group1_ER_Strategy_update_1.jpg | thumb | 750px | center | Figure 2: Strategy for Escape Room Challenge ]]
 
== Simulation ==
The result of the strategy and the implemented functions is shown in Figure 3, where one can see that a gap is detected. PICO moves to a point in front of the opening and rotates towards the opening. While driving through the corridor, PICO keeps its distance from the walls.
 
[[File:Group1_ER_Simulation_1.gif| frame | 100px | center | Figure 3: Simulation in emc-sim for Escape Room]]
 
In Figure 4 one can see how PICO behaves if no gaps are detected: PICO keeps moving 1 meter from its scan location and scans again.
 
[[File:Group1_NoGapRoutine.gif| frame | 100px | center | Figure 4: Escape Room no gap simulation in emc-sim]]
 
== Escape Room Challenge results ==
The challenge attempt can be seen in Figure 5. As can be seen during the scanning, the robot rotates in a smooth counterclockwise motion in which the gap detection algorithm detects the exit. After validation of the gap, a pre-target and target are placed relative to the detected gap location. However, the movement towards them went wrong. Normally, the motion component of the software calculates the smallest rotation between PICO's angle and the pre-target and target of the gap. During development and validation of the written software in the simulator, this component worked as expected. During the challenge, however, it emerged that the calculation did not work when comparing positive and negative angles. As a result, the calculation indicated that a positive angle had to turn in the negative direction to reach its target. Because of this problem, PICO rotated 270 degrees to the right instead of 90 degrees to the left. This problem came up twice: after the 360 degree scan and after reaching the pre-target.
 
PICO immediately moved out of the room in the first attempt, in 55 seconds. However, PICO decided to align itself with the reference angle using the largest rotation, which cost a lot of time. In the end, group 1 is proud of the result.
 
After the challenge, this problem was examined and a possible solution was applied. If the simulation were performed again with the correct rotations, the simulation time could be reduced to 25 seconds, which would have resulted in a 2nd place during the challenge instead of the 4th place that was obtained.
 
In summary, every software component worked without failing completely, and after the bug fix in the motion component everything works as intended without any known errors. The wall detection and motion software can be reused for the hospital challenge and made usable with some additions, without starting from scratch again.
 
 
[[file:Group1_ERC_result.gif | frame | 800px | center | Figure 5: Escape Room Challenge live stream result]]
 
 
 
= Hospital Challenge =
The shortage of care services is increasing over time due to the ageing population and is expected to keep increasing in the coming years [https://www.cbs.nl/nl-nl/dossier/arbeidsmarkt-zorg-en-welzijn]. Without an increase in the number of care providers, the chances of burnouts and other health issues for these people will increase significantly. Knowing that a spontaneous increase in healthcare personnel is unlikely, other solutions for work relief need to be found. One of these solutions can be found in automated robots. These robots can perform relatively simple tasks and thereby assist the care providers. A simple, yet time-consuming, task of these care providers is fetching medicines from certain cabinets to give to the patients. The Hospital challenge is based on this task.
 
In this challenge, a robot (PICO in this case) should be programmed in such a way that it can 'fetch' and 'deliver' items from one place to another. The fetching and delivering are not physically possible in this challenge; therefore a specific procedure needs to be followed, which is specified in the problem description [http://cstwiki.wtb.tue.nl/index.php?title=Mobile_Robot_Control#Hospital_Competition]. Logically, this robot should be able to move without bumping into either static (i.e. walls or doors) or dynamic (i.e. people) objects, and it should work as fast as possible. Also, the robot should be able to perform these tasks with only the order of cabinets to visit as input.
 
 
== Design architecture ==
As for the Escape room challenge, the first step of the Hospital challenge was to determine the internal information exchange in a clear way. This meant specifying only the main components with their main functions. The goal for the Escape room challenge was to define an architecture that could be used for the Hospital challenge as well. When discussing the structure of the previous architecture, it became clear that it would not hold for the Hospital challenge. Reasons for altering the architecture were ambiguities in the goals of certain components and the underestimation of the size of the Perception component. Ambiguities mainly occurred in the information transfer in the Strategy component and into the Movement Control component. Previously, it was assumed that Movement Control needed the current state from the Finite State Machine (FSM); later it was decided that only the target would suffice. Also, by splitting Monitoring from Perception as a separate component, it was ensured that the code would remain clear. The Perception component was relatively small during the Escape room challenge, so it was assumed Monitoring could be a main function of Perception in the Hospital challenge as well. During the discussions on the design architecture for the Hospital challenge, however, it became clear this assumption was wrong: Perception was expected to contain considerably more functions. By making Monitoring a separate component, Perception would become smaller and thus clearer to read. This also aligns with the paradigms explained in this course; namely, once code starts becoming more complex, it should remain explainable [3].
[[Image:designarchitecture.png | thumb | 750px | center | Figure 6: Design architecture ]]
The software architecture for the Hospital challenge thus has one more component than that of the Escape room challenge, in the form of Monitoring. Additionally, the transfer of data from the output of Strategy/Planning to the World Model and from the World Model to Movement Control was changed. These changes made clear that if a component is expected to grow significantly in size, it is wise to look for functions inside that component that can be split off. Discussing the internal data transfer between components more thoroughly also became a learning point, since it needed to be changed after the Escape room challenge.
 
The final Design architecture is thus formed by the components Perception, Monitoring, World Model, Visualization, Movement Control and Strategy/Planning. These components, combined with their main functionalities and their inputs and outputs, can be found in Figure 6. For instance, one of the main functions of the Strategy/Planning component is the FSM, which determines the state PICO is in and what it should do next. As input it needs, among other things, the current position of PICO; as output it sends the state to the World Model and a goal to the Path planning function of Strategy/Planning. Whether something is an input or an output can be determined from the direction of the arrow.
 
All the components and their functionalities mentioned in the Design architecture are explained in more detail in the following sections.
 
== Components ==
In the following sections, the different components and their main functions are explained in detail. Furthermore, the testing phase and the final challenge results are discussed.
 
=== Finite state machine ===
To be able to navigate and perform the necessary actions during the hospital challenge, a finite state machine (FSM) was composed. The FSM combines the functions written in the different components in a predefined sequence, so that PICO can visit the cabinets in the order provided at startup. The FSM is written in a switch-case structure which performs specific actions in each state and moves between states depending on the transition conditions. In Figure 7, the FSM is visualized in a state flow graph which shows the connections between the different states. The states and transition conditions are explained below.
 
[[Image:Group_1_State_flow.PNG | thumb | 800px | center | Figure 7: FSM state structure and connection ]]
 
'''States'''
* '''INIT''': During the init state, all variables used within the FSM are set to their default values. This state is used to recover from crashes within the program, more specifically from an unknown state or an exception within the FSM. By resetting all variables within the FSM, the program can continue its operation as if nothing happened.
* '''IDLE''': The idle state is used to put PICO into idle mode when the program has finished the cabinet sequence. If this state is executed at startup, it does nothing other than move to the next state.
* '''LOCALIZE''': Before PICO can be moved within the environment, its location needs to be estimated. By rotating and translating, PICO generates enough data for the perception module to localize it.
* '''GETTARGET''': This state gets the next cabinet ID from the provided sequence and sets the goal position where PICO should be to pick up or deliver the medicines.
* '''CALCULATEPATH''': The A* pathfinding algorithm is executed, producing a path based on the current position of PICO and the goal received from the 'GETTARGET' state.
* '''MOVE''': Traverses the path generated by the 'CALCULATEPATH' state, which contains a set of positions to move to in order to reach the goal.
* '''FOCUS''': When PICO arrives at the cabinet, its orientation should face the closest cabinet side.
* '''ARRIVED''': Removes the reached cabinet from the sequence and triggers the export event for the LRF data.
* '''FINISHED''': All cabinets from the provided sequence have been visited when this state is executed. This state notifies the user and allows the program to exit.
* '''RESETGRID''': When no path can be produced between PICO and the current goal, the grid is reset to the static layout of the map. This removes possible false positives from the known data, which enables PICO to create a new path.
* '''RAGDOL''': Fail-safe to prevent deadlocks when a dynamic object prevents path generation. This can occur when a dynamic object is standing in a doorway, or when the object pushes PICO into a corner and restricts PICO's movement completely.
* '''ROTATEANDSCAN''': This state is executed when PICO cannot move in the direction it is facing due to a door or static object, or when the localization has not been updated enough. The LRF data at that position may not be good enough to meet the requirements for object recognition; by rotating PICO, the LRF data can give more information about the surroundings. Another effect is that the localization algorithm uses this data to update the position of PICO.
 
'''Transition conditions'''


* '''A''': Finished flag is false.
* '''B''': Localization algorithm returns true when the location is known.
* '''C''': Cabinets left in the committed sequence.
* '''D''': A path exists to the selected cabinet.
* '''E''': Arrived at the end of the path.
* '''F''': Aligned with the cabinet side.
* '''G''': No cabinets left in the committed sequence.
* '''H''': Finished flag is true.
* '''I''': PICO is already at the goal position.
* '''J''': No path to the goal position could be created.
* '''K''': Grid has been reset fewer than 10 times.
* '''L''': Grid has been reset for the 10th time.
* '''M''': RAGDOL has been active for 5 seconds.
* '''N''': Monitoring component detects less than 0.25 meter of movement in 7 seconds.
* '''O''': Scan mode has been active for 3 seconds.
* '''P''': Monitoring component flags that the path to the goal is obstructed.
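
A reduced sketch of how such a switch-case FSM could be implemented is shown below. The fail-safe states (RESETGRID, RAGDOL, ROTATEANDSCAN) are omitted for brevity, the condition flags are hypothetical stand-ins for the world model and monitoring data, and the transitions only approximate the graph of Figure 7.

<pre>
// Hypothetical condition flags; in the real code these come from the
// world model and the monitoring component.
struct Conditions {
    bool finished;      // H (true) / A (false)
    bool localized;     // B
    bool cabinetsLeft;  // C (true) / G (false)
    bool pathExists;    // D
    bool arrived;       // E
    bool aligned;       // F
};

enum class State { INIT, IDLE, LOCALIZE, GETTARGET, CALCULATEPATH,
                   MOVE, FOCUS, ARRIVED, FINISHED };

// One update step of the FSM main loop.
State step(State s, const Conditions& c) {
    switch (s) {
        case State::INIT:          return State::IDLE;  // reset, then continue
        case State::IDLE:          return c.finished ? State::FINISHED
                                                     : State::LOCALIZE;
        case State::LOCALIZE:      return c.localized ? State::GETTARGET
                                                      : State::LOCALIZE;
        case State::GETTARGET:     return c.cabinetsLeft ? State::CALCULATEPATH
                                                         : State::FINISHED;
        case State::CALCULATEPATH: return c.pathExists ? State::MOVE
                                                       : State::CALCULATEPATH;
        case State::MOVE:          return c.arrived ? State::FOCUS : State::MOVE;
        case State::FOCUS:         return c.aligned ? State::ARRIVED : State::FOCUS;
        case State::ARRIVED:       return c.cabinetsLeft ? State::GETTARGET
                                                         : State::FINISHED;
        case State::FINISHED:      return State::FINISHED;
    }
    return State::INIT;  // unknown state: recover via INIT
}
</pre>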


=== World model ===
The world model component lays the foundation of the coding structure that has been used since the beginning of the course. Information that has to be shared between the different components is written to and read from this class. This structure allowed the components to be constructed without dependencies on other components. A summary of the data stored within the world model:
*'''World model data''': static layout of the map divided into walls, cabinet sides and corners, where convex and concave types are saved separately.
*'''Localization data''': current position and odometry displacement since the last known position.
*'''Perception data''': LRF data, the detected walls, the detected corners (divided into convex and concave lists), the detected static objects, the resulting potential field vector and the point closest to PICO.
*'''Strategy data''': 2D grid, path nodes and string-pulled path nodes.
*'''Monitoring data''': error flags for path obstruction and the movement check.
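
A condensed sketch of what this shared class could look like is given below; the type names follow the text (Vector3D, Line2D), but the exact members and the pose type are assumptions.

<pre>
#include <vector>

struct Vector3D { double x = 0, y = 0, type = 0; };  // see next section
struct Line2D   { Vector3D begin, end; };
struct Pose     { double x = 0, y = 0, a = 0; };     // assumed pose type

struct WorldModel {
    // World model data: static layout of the map
    std::vector<Line2D> walls, cabinetSides;
    std::vector<Vector3D> convexCorners, concaveCorners;
    // Localization data
    Pose currentPosition, odomSinceLastFix;
    // Perception data
    std::vector<float> lrfRanges;
    std::vector<Line2D> detectedWalls;
    std::vector<Vector3D> detectedObjects;
    Vector3D potentialFieldVector, closestPoint;
    // Strategy data
    std::vector<std::vector<bool>> grid;  // true = reachable
    std::vector<Vector3D> pathNodes, pulledPathNodes;
    // Monitoring data
    bool pathObstructed = false, movementStalled = false;
};
</pre>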




[[Image:Group1_wm_json_format.PNG | thumb | 200px | left | Figure 8: JSON data structure]]


In addition to sharing and storing data of all the other components, the world model was also responsible for loading and parsing the JSON map to a usable format. Loading the map from the JSON file was done using the open-source JSON for Modern C++ library written by Niels Lohmann [https://github.com/nlohmann/json [4]]. Based on the available information from the file structure of the 2019 map, a format was constructed to be used in the project. As seen in Figure 8, the JSON file consisted of three data types, which would be delivered one week before the challenge: a list of corner points in world-space coordinates, a list of connected corner points seen as walls, and a list of objects representing the cabinets, each consisting of a list of connected corner points that make up the four sides of the cabinet. The bottom-left corner of the map was considered the world frame origin with position (0,0). Besides that, the IDs of the corners started at the origin with index 0 and increased by one for every next corner. Building on this logic, it was possible to define closed polygon shapes by setting end points for each one based on specified corner indices. This functionality is used to determine for each corner whether it has a convex or concave property, which is used within the perception module to stabilize the localization algorithm. The loading order of the walls and cabinets is used to distinguish the inner walls and cabinet sides from the external walls. This was done by defining a normal vector direction based on the loading order of the corners that make up the wall or cabinet side. The usage of these functionalities is explained in more detail in the perception section.
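
A sketch of this loading step with the nlohmann library is shown below. The field names ("points", "walls") are assumptions based on the described 2019 file structure and may differ from the actual map file.

<pre>
#include <fstream>
#include <string>
#include <vector>
#include <nlohmann/json.hpp>  // JSON for Modern C++ [4]

struct Corner { int id; double x, y; };
struct Wall   { int from, to; };  // indices into the corner list

void loadMap(const std::string& path,
             std::vector<Corner>& corners, std::vector<Wall>& walls) {
    std::ifstream file(path);
    nlohmann::json j;
    file >> j;
    int id = 0;  // corner IDs start at 0 and increase by one
    for (const auto& p : j["points"])
        corners.push_back({id++, p[0].get<double>(), p[1].get<double>()});
    for (const auto& w : j["walls"])
        walls.push_back({w[0].get<int>(), w[1].get<int>()});
}
</pre>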


[[Image:Group1_Reference_frames.jpeg | thumb | 400px | right | Figure 9: Reference frame visualization]]


The corners from the JSON file were stored in a Vector3D object. This object was created to store three double values: the x and y position in the world frame, while the third value was used to store the corner type, convex (pi/2) or concave (3pi/2). The walls and cabinets are stored separately in a vector in the world model as Line2D objects. The Line2D object is a class that stores two Vector3D objects representing the begin and end point of a wall or cabinet side. Besides storing the two positions, the class can also return the length of the line and its normal.
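
The sketch below illustrates what these two storage classes could look like; the method names are illustrative, but the stored quantities follow the text.

<pre>
#include <cmath>

// x and y position in the world frame; 'type' encodes the corner type:
// convex (pi/2) or concave (3*pi/2).
struct Vector3D {
    double x = 0, y = 0, type = 0;
};

struct Line2D {
    Vector3D begin, end;

    double length() const {
        return std::hypot(end.x - begin.x, end.y - begin.y);
    }
    // The normal direction depends on the order of begin and end, which
    // is what distinguishes inner walls and cabinet sides from the
    // external walls.
    Vector3D normal() const {
        const double len = length();
        Vector3D n;
        n.x = -(end.y - begin.y) / len;
        n.y =  (end.x - begin.x) / len;
        return n;
    }
};
</pre>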


By now, the project contained three coordinate frames, of which one was independent of the other two: PICO's own and global reference frames (seen from PICO and from its initial starting position within the world) and the world frame (loaded from the JSON file). To get everything working together, the world frame was used as the global reference frame. By doing all calculations for all components in the same frame, the number of coordinate transformations is reduced to a bare minimum, which reduces the complexity of the calculations and makes them easier to explain. As said before, the global coordinate frame has its origin (0,0) at the bottom left of the map, with the x- and y-axes positive to the right and upwards respectively. PICO's own frame has its x-axis positive forwards and its y-axis positive to the left of PICO. Both are shown in Figure 9, which also explains the relationship between them; the world reference frame is given in orange, PICO's global reference frame in red and PICO's own frame in green.
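
The one transformation that remains, from a measurement in PICO's own frame to the global frame, is a standard rotation plus translation; a minimal sketch (names assumed):

<pre>
#include <cmath>

struct Pose  { double x, y, a; };  // PICO's pose in the global frame
struct Point { double x, y; };

// Express a point measured in PICO's own frame (x forwards, y to the
// left) in the global frame, where 'a' is PICO's heading with respect
// to the global x-axis.
Point picoToGlobal(const Pose& pico, const Point& local) {
    Point g;
    g.x = pico.x + std::cos(pico.a) * local.x - std::sin(pico.a) * local.y;
    g.y = pico.y + std::sin(pico.a) * local.x + std::cos(pico.a) * local.y;
    return g;
}
</pre>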


One week before the hospital challenge, this year's map was released (the heightmap is shown in Figure 10). Unfortunately, the data [http://cstwiki.wtb.tue.nl/images/Mrc2020_finalmap_corrected.zip [5]] was not in the same format as last year's [http://cstwiki.wtb.tue.nl/images/Finalmap2019.zip [6]]: negative coordinates were provided, which had not been taken into account when writing the loading sequence. So, before the file could be used, small modifications had to be made such that the data had the same format as the map from the previous year. One of the modifications was shifting all world coordinates up until everything had a positive value. Another modification was renumbering the corner IDs such that the first corner started at 0 and the index sequentially increased by one for every next corner. The last one was reversing the corner index numbering of the internal walls and cabinet sides.


[[Image:Group1_wm_maplayout.PNG| thumb | | right |Figure 10: Visualization of heightmap used]]


In order to create a bridge between the loaded JSON map and the data collected by the perception module on one side and the strategy component on the other, a 2D grid was constructed. The grid combines all the different pieces of collected data such that the strategy component can calculate the next steps needed to get to the next goal. A 2D grid was chosen over a node-based architecture to be more flexible in path creation and its tuning. It allowed specifying various zones around objects and walls to create a safety zone, without having to place nodes manually in the JSON map file and specify the connections between them. An additional advantage of the grid was the navigation around dynamic objects: when a (dynamic) object obstructed the current path, a new path could be constructed that transitioned seamlessly into the old path, resulting in smooth movement behaviour. This would be harder to accomplish with a node-based structure, since a nearby node could interrupt the movement and slow PICO down. How the data from the 2D grid is handled is explained in more detail in the strategy section. The grid initially stored integer values corresponding to the type of data stored in the cell at the corresponding world position, such as walls, corners and sensed objects. Further on in the project it became clear that the only thing the strategy component needed to know was whether the world position belonging to a specific cell was reachable or unreachable by PICO. To reduce the complexity of the grid, the data type was changed to a Boolean value: a cell storing 1 (true) means its world location is accessible, and 0 (false) means it is unreachable. Changing the data type of the grid had no effect on the workings of the A* pathfinding: the grid is used to see where PICO can go and where it should stay away from, which is in fact just a Boolean value.


The first step in building the 2D grid was to break up the world space occupied by the hospital environment into equally large pieces. To achieve this, the maximum horizontal and vertical positions are saved while loading the corner data from the map file. Then, the grid dimensions were set to these lengths divided by the grid resolution. In order to get the best grid result, the resolution is determined based on the given data in the JSON file. Since all corner positions are rounded to one decimal place, except for 6 corner positions with more decimals, the resolution could be set to as coarse as 0.1 meters without loss of accuracy. This resolution results in a grid of 67x130 cells for the provided map, which has surface dimensions of 6.7x13 meters. A finer resolution (smaller cells) results in a higher grid density, but increases the processing time required by the pathfinding algorithm. In Figure 11a the initial grid is shown, where all grid cells are set to accessible.


The next step was filling the grid with the loaded data stored in the Line2D objects from before. Since these objects store the beginning and end of all the walls and cabinet sides, it was possible to loop over each object, divide the line between the endpoints into smaller segments that could be matched to the grid, and set the grid cell at the corresponding world position to unreachable (false). This eventually led to a grid as seen in Figure 11b. By giving different offsets to specified Line2D objects, it was possible to create some margin for error in the movement and path planning, such that PICO will not move directly next to obstructions such as the walls and cabinets. This effect can be seen in Figure 11c.
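
A sketch of this rasterization step, with the safety offset included, is given below; the function name and parameters are illustrative.

<pre>
#include <cmath>
#include <vector>

// Mark the grid cells covered by a wall or cabinet side (x1,y1)-(x2,y2)
// as unreachable, padded by a safety offset. res is the cell size in
// meters (0.1 in this project).
void rasterize(std::vector<std::vector<bool>>& grid,
               double x1, double y1, double x2, double y2,
               double res, double offset) {
    const int H = static_cast<int>(grid.size());
    const int W = static_cast<int>(grid[0].size());
    const int pad = static_cast<int>(std::ceil(offset / res));
    const int steps = static_cast<int>(std::hypot(x2 - x1, y2 - y1) / res) + 1;
    for (int s = 0; s <= steps; ++s) {
        const double t = static_cast<double>(s) / steps;
        const int cx = static_cast<int>((x1 + t * (x2 - x1)) / res);
        const int cy = static_cast<int>((y1 + t * (y2 - y1)) / res);
        for (int i = -pad; i <= pad; ++i)
            for (int k = -pad; k <= pad; ++k) {
                const int gx = cx + i, gy = cy + k;
                if (gx >= 0 && gx < W && gy >= 0 && gy < H)
                    grid[gy][gx] = false;  // false = unreachable
            }
    }
}
</pre>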


To let PICO react to sensed obstacles within the hospital environment, the grid was treated as a living object: when an object is pushed to the world model, the grid cells corresponding to the object's location are marked as unreachable. Just like for the walls and cabinets, an offset could be specified, which is filled into the grid around the locations where obstacles were detected. The combined result can be seen in Figure 11d.


<gallery widths=150px heights=250px perrow=5>
Image:Group1_wm_empty_map.PNG|Figure 11a: Empty 2D grid
Image:Group1_mw_load_map.PNG|Figure 11b: Loaded JSON data parsed
Image:Group1_wm_offset_map.PNG|Figure 11c: Added offset
Image:Group1_wm_obstacle_map.PNG|Figure 11d: Sensed object parsed
</gallery>


=== Strategy ===
The purpose of the strategy class is to compute and return the path that PICO needs to take in order to complete its tasks. This path is a vector of 2D points in the global reference frame. These waypoints are determined in two steps.


First, the shortest path from PICO to its destination is computed using the A* shortest path algorithm [https://en.wikipedia.org/wiki/A*_search_algorithm [7]]. This algorithm is run on the grid cells of the world map, which is shown in Figure 12. A* is similar to Dijkstra's shortest path algorithm, but thanks to its heuristic A* is much faster [https://www.youtube.com/watch?v=g024lzsknDo [8]]. Figure 13 shows a path found by the A* algorithm.
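
The sketch below shows a 4-connected A* on the Boolean grid as described above; it is a minimal version (no diagonal moves, Euclidean heuristic) and not the actual implementation.

<pre>
#include <algorithm>
#include <cmath>
#include <queue>
#include <utility>
#include <vector>

using Cell = std::pair<int, int>;  // (x, y) grid coordinates

struct Node { int x, y; double f; };
struct ByF  { bool operator()(const Node& a, const Node& b) const {
                  return a.f > b.f; } };

// Returns the path from start to goal over reachable (true) cells,
// or an empty vector when no path exists.
std::vector<Cell> aStar(const std::vector<std::vector<bool>>& grid,
                        Cell start, Cell goal) {
    const int H = grid.size(), W = grid[0].size();
    if (start == goal) return {start};
    auto h = [&](int x, int y) {  // Euclidean distance heuristic
        return std::hypot(x - goal.first, y - goal.second);
    };
    std::vector<std::vector<double>> g(H, std::vector<double>(W, 1e18));
    std::vector<std::vector<Cell>> parent(H, std::vector<Cell>(W, {-1, -1}));
    std::priority_queue<Node, std::vector<Node>, ByF> open;
    g[start.second][start.first] = 0;
    open.push({start.first, start.second, h(start.first, start.second)});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        const Node n = open.top(); open.pop();
        if (n.x == goal.first && n.y == goal.second) break;
        for (int k = 0; k < 4; ++k) {
            const int nx = n.x + dx[k], ny = n.y + dy[k];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H || !grid[ny][nx])
                continue;
            const double cost = g[n.y][n.x] + 1.0;
            if (cost < g[ny][nx]) {
                g[ny][nx] = cost;
                parent[ny][nx] = {n.x, n.y};
                open.push({nx, ny, cost + h(nx, ny)});
            }
        }
    }
    std::vector<Cell> path;
    if (parent[goal.second][goal.first].first < 0) return path;  // no path
    for (Cell p = goal; p != start; p = parent[p.second][p.first])
        path.push_back(p);
    path.push_back(start);
    std::reverse(path.begin(), path.end());
    return path;
}
</pre>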


After the A* algorithm has found a shortest path, a large vector is obtained with all grid cells that need to be visited. There is, however, no need to reach all these points one by one; in fact, using this as a reference trajectory would only slow PICO down due to it correcting for overshoot. For this reason, a string pulling algorithm is used to keep only the waypoints that are strictly necessary to avoid hitting any obstacles or walls. This string pulling algorithm works as follows.


It starts by evaluating whether the second node can be reached from the first node in a straight line without crossing unreachable space. If so, it continues by evaluating whether the third node can be reached, and so on. If it stumbles upon a node that cannot be reached in a straight line, in other words, the straight line between the first node and the node being evaluated would cross an unreachable grid cell, then the node prior to the unreachable one is stored as a waypoint. The algorithm is then repeated with the saved waypoint as the new node from which the path is evaluated. This process continues until the final waypoint is reached. The result is a vector containing only the waypoints that are strictly necessary to reach the target without hitting an obstacle or wall, see Figure 14.
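
A sketch of this string pulling step, with a simple integer line walk (Bresenham-style) as the visibility check; names and details are illustrative.

<pre>
#include <cmath>
#include <utility>
#include <vector>

using Cell = std::pair<int, int>;

// True if the straight line between two cells crosses no unreachable cell.
bool lineOfSight(const std::vector<std::vector<bool>>& grid,
                 int x0, int y0, int x1, int y1) {
    int dx = std::abs(x1 - x0), dy = std::abs(y1 - y0);
    const int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
    int err = dx - dy;
    while (true) {
        if (!grid[y0][x0]) return false;       // crosses an obstacle
        if (x0 == x1 && y0 == y1) return true;
        const int e2 = 2 * err;
        if (e2 > -dy) { err -= dy; x0 += sx; }
        if (e2 <  dx) { err += dx; y0 += sy; }
    }
}

// Keep only the strictly necessary waypoints of an A* path: every node
// still visible from the last kept waypoint is dropped; the node before
// the first blocked one becomes the next waypoint.
std::vector<Cell> stringPull(const std::vector<std::vector<bool>>& grid,
                             const std::vector<Cell>& path) {
    if (path.size() < 3) return path;
    std::vector<Cell> out{path.front()};
    size_t anchor = 0;
    for (size_t i = 2; i < path.size(); ++i) {
        if (!lineOfSight(grid, path[anchor].first, path[anchor].second,
                         path[i].first, path[i].second)) {
            anchor = i - 1;  // last node that was still reachable
            out.push_back(path[anchor]);
        }
    }
    out.push_back(path.back());
    return out;
}
</pre>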


<center>
<gallery mode=packed heights=300px widths=290px>
File:gridgroup12020.png| Figure 12: Visualization of the grid
File:waypointsgroup12020.png| Figure 13: Path generated using A* algorithm
File:waypointsnewgroup12020.png| Figure 14: Waypoints with string pulling algorithm
</gallery>
</center>



= Group Members =

Luuk Verstegen - 1252488

Bob Bindels - 1246348

Stijn van den Broek - 1252011

Lars van Dooren - 1249169

Tjeerd Ickenroth - 1232296


= Introduction =

= Design Document =

A design document was created to elaborate the approach to the Escape Room challenge (and some aspects of the Hospital challenge). The document covers several topics, such as the requirements, components, specifications, functions and interfaces. Download the file here: [http://cstwiki.wtb.tue.nl/images/4SC020_Mobile_Robot_Control.pdf Design Document]


= Escape Room Challenge =

The task of the escape room challenge is to find the corridor and exit the room through it. Since our group did not have much experience with C++ programming, it was decided to split up the work between a robust method (2 people) and a less robust method (3 people):

* Robust: the first method is based on line segments and corner detection. From the LRF data, the angle indices corresponding to the start and end points of line segments can be found. From these indices and the corresponding distances, the location of the corridor entrance can be determined. The robot should drive to this position, align itself with the walls of the corridor and drive parallel to the walls until the finish is reached. Since not much of the Wiki example can be used here, it was expected that this method would take the most time to get working properly.
* Less robust: the second method is based on the full example that was placed on the Wiki. Here, wall detection, drive control and a finite state machine are already implemented, and the method is therefore relatively easy to adapt. Finding the corridor in this method is relatively simple: when a sudden distance jump is found between one LRF index and the next, the corridor is found. To make sure that the robot, once it has found this sudden distance jump, drives towards the corridor, mostly changes in the finite state machine are required.

==== Robust method ====

For the first method, line segmenting is implemented. This is done as follows:

'''1.''' First, a line is drawn between the start point with index 0 and the end point with index 999.

'''2.''' Then, for each LRF data point, the perpendicular distance to the line drawn in step 1 is determined, and both the index and the distance of the point with the largest distance are saved.

'''3.''' If this distance lies above a certain threshold value, the point found in step 2 does not lie on the line drawn between points 0 and 999. In that case, the process is repeated from step 1, but with the end point changed to the index determined in step 2. If the distance lies below the threshold value, a line is detected between the start and end point of step 1. The process is then repeated with the end point of the current iteration as the start point of the next iteration.

'''4.''' The steps above are repeated until the start point index equals 999, which means that all line segments have been found.

Note that since all the start and end points of the line segments are saved, the corner indices are also known. A problem that had to be dealt with in this method is that when the robot is driving in the open-ended corridor, certain distances in the LRF data are close to zero, since the laser beams sent outside the corridor are not reflected by a wall or object. This resulted in the robot detecting the indices corresponding to angles pointing outside the corridor as corners, which obviously is not the case. Therefore, each time the robot receives new LRF data, the indices that correspond to a distance smaller than 0.1 m are stored and later taken out of the array of corner indices. The result is that only the indices of the actual corners remain.

Another problem, which was not solved at the time of the challenge, is that a line should not be drawn between every pair of subsequent corner indices. For example, when the robot is driving straight through the corridor, it detects the ends of the left and right walls as corners, but between these corners there is no wall. This distinction should be made such that the robot knows where a wall is located and where not, so that it can align itself with the walls in the corridor. At the time of the challenge it was not yet figured out how this should be done.
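
A recursive variant of steps 1 to 4 above is sketched below (recursing on both halves yields the same segments as the iterative description); the threshold and names are illustrative.

<pre>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Perpendicular distance from point p to the line through a and b.
double distToLine(const Point& p, const Point& a, const Point& b) {
    const double len = std::hypot(b.x - a.x, b.y - a.y);
    return std::abs((b.x - a.x) * (a.y - p.y) -
                    (a.x - p.x) * (b.y - a.y)) / len;
}

// Split the scan between indices 'first' and 'last': while some point
// lies further from the connecting line than the threshold, split at
// the furthest point. Endpoint indices (the corners) end up in 'breaks';
// call with first = 0, last = 999 and add index 0 yourself.
void splitSegments(const std::vector<Point>& scan, int first, int last,
                   double threshold, std::vector<int>& breaks) {
    double dmax = 0;
    int imax = -1;
    for (int i = first + 1; i < last; ++i) {
        const double d = distToLine(scan[i], scan[first], scan[last]);
        if (d > dmax) { dmax = d; imax = i; }
    }
    if (dmax > threshold) {
        splitSegments(scan, first, imax, threshold, breaks);
        splitSegments(scan, imax, last, threshold, breaks);
    } else {
        breaks.push_back(last);  // straight segment: 'last' is a corner
    }
}
</pre>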

For the detection of the corridor, two methods were thought of:

* The first method for detecting the corridor is based on a jump in distance between subsequent corner indices that lie within a certain index bound of each other; this is visualized in Figure 1. When this jump is detected, the robot should rotate and drive towards the corner with the smallest distance and from there perform a movement similar to the one explained in the less robust method. With this way of detecting the corridor, the main difference between the robust and less robust methods is that the robust method only checks for a distance jump between the corner indices instead of all indices, thus saving computation time. A disadvantage of this method is that the robot cannot detect the corridor when it is right in front of it, since then the index gap between the left and right corner is outside the specified bound.
* The second method for detecting the corridor is based on the convexity of the corner points. When a high-low-high sequence in distance is found between three successive corner points, the middle corner is one of the corners at the entrance of the corridor. This is visualized in Figure 2. Unfortunately, due to time constraints it was not possible to implement this successfully before the escape room challenge.

[[File:beun_escape_room.png|400px|thumb|center|Figure 1: Illustration of the difference searching algorithm.]]
[[File:HLH_sequence_v2.png|400px|thumb|center|Figure 2: Illustration of the HLH searching algorithm.]]

==== Less robust method ====

This algorithm works as follows. At first, the robot searches for large differences between two subsequent points from the LRF. The program continuously looks through the data; if the ranges of two consecutive data points differ by more than 0.5 meter, the program assumes that the corridor is found. This is illustrated in Figure 1. If initially no corridor is found, the robot rotates until it has seen the entire 360 degree view. It could be the case that still no corridor is found because the room is too big. In that case the robot simply drives forward and stops just before hitting a wall. It then drives a bit backwards and searches for the corridor again; this process repeats until the corridor is found.
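
In code, this jump detection can be as small as the sketch below; the 0.5 m threshold comes from the text, the rest is illustrative.

<pre>
#include <cmath>
#include <vector>

// Return the index where two consecutive LRF ranges differ by more than
// 0.5 m (a corridor candidate), or -1 when no jump is found.
int findCorridorJump(const std::vector<float>& ranges) {
    const float jump = 0.5f;
    for (size_t i = 1; i < ranges.size(); ++i)
        if (std::fabs(ranges[i] - ranges[i - 1]) > jump)
            return static_cast<int>(i);
    return -1;
}
</pre>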

When the corridor is found, the robot rotates until the corridor is in front of it. The robot then drives towards the detected corner point of the corridor. When the robot has reached the corner, it drives a bit sideways until it has enough space to move forward. Now the assumption can be made with great certainty that the robot is located in the corridor. To drive out of the corridor, the following steps are taken by the algorithm:

* If the robot almost hits a wall, it checks which of the ranges depicted in Figure 3 is the furthest.
* The robot then drives in the positive x-direction, positive y-direction or negative y-direction, depending on which of the ranges in step 1 is the furthest.
* This process keeps repeating, which results in the robot driving out of the corridor in a zigzag-like pattern.
* In each of these states, the robot checks whether the finish line is reached. This is done by counting the number of infinite values in the laser range data; once the counter surpasses a set value, the end of the corridor is reached and thus the finish line is passed.

[[File:beun_escape_room_corridor.png|300px|thumb|center|Figure 3: Range checking inside the corridor.]]

==== Results ====

Unfortunately, the robust method could not be finished in time, so the less robust method was used during the escape room challenge. The algorithm was able to pass the finish line within 20 seconds, which was the second fastest time of all the groups.