Embedded Motion Control 2019 Group 3, revision of 2019-06-19 by S136625 (section: Odometry data)<br />
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Introduction =<br />
This wiki page describes the approach and process of group 3 concerning the Escape room challenge and the Hospital challenge with the PICO robot. The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Hospital challenge involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
The wiki is subdivided into the following parts: firstly, the approach for the Escape room challenge is explained and evaluated. The second topic is the approach and evaluation of the Hospital challenge. This is followed by a full description of the system architecture used to perform the Hospital challenge. After the system architecture, the most important tests and test results are explained. Lastly, conclusions and recommendations are provided.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge. Below is a short clip of the escape room challenge.<br />
<br />
[[File:EscapeRoom.gif|center|alt=Clip of group 3 at the escape room|frame|Clip of group 3 at the escape room.]]<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|State chart Escape room challenge|center|thumb|1000px]]<br />
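The state chart above can be sketched as a small transition function. The sketch below is an illustrative reconstruction, not the competition code: the sensor flags (''wall_front'', ''gap_right'', ''aligned'', ''turn_done'', ''in_corridor'') are hypothetical names for conditions derived from the rangefinder data.<br />

```python
from enum import Enum, auto

class State(Enum):
    DRIVE_FORWARD = auto()   # drive until a wall is detected
    ALIGN = auto()           # line up with the wall on the right
    FOLLOW_WALL = auto()     # stay between min and max distance to the wall
    TURN_CCW = auto()        # rotate 90 degrees counterclockwise at an inner corner
    TURN_INTO_EXIT = auto()  # turn right into the detected gap
    FINISHED = auto()        # second gap passed: finish line crossed

def next_state(state, sensors):
    """One tick of the wall-follower state chart.

    `sensors` is a dict of hypothetical boolean flags derived from the
    laser rangefinder: 'wall_front', 'gap_right', 'aligned', 'turn_done'
    and 'in_corridor'.
    """
    if state is State.DRIVE_FORWARD and sensors['wall_front']:
        return State.ALIGN
    if state is State.ALIGN and sensors['aligned']:
        return State.FOLLOW_WALL
    if state is State.FOLLOW_WALL:
        if sensors['gap_right']:
            # a gap on the right is the exit corridor, or, if we are
            # already inside the corridor, the finish line
            return State.FINISHED if sensors['in_corridor'] else State.TURN_INTO_EXIT
        if sensors['wall_front']:
            return State.TURN_CCW
    if state is State.TURN_CCW and sensors['turn_done']:
        return State.FOLLOW_WALL
    if state is State.TURN_INTO_EXIT and sensors['turn_done']:
        return State.FOLLOW_WALL
    return state  # no transition condition met
```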
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest-performing group as a result of these configuration changes. We felt, however, that the modifications were worth the slowdown and demonstrated the robustness of our simple approach.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital floor plan. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png|frame|center|Example path point map]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from another point in a straight line. The points that the robot can drive to in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path leads through a doorway or hallway, or through a room. This helps determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not a neighboring point, the software will create a route to that point. This route is a list of points to which the robot needs to drive in order to reach the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
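As an illustration, a minimal implementation of Dijkstra's algorithm over such a waypoint graph could look as follows. This is a sketch; the actual implementation in the project code may differ.<br />

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a waypoint graph.

    `graph` maps a point id to a list of (neighbour, distance) pairs.
    Returns the list of point ids from start to goal (inclusive), or
    None if the goal cannot be reached.
    """
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for neighbour, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(neighbour, float('inf')):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    if goal not in dist:
        return None  # goal unreachable
    route = [goal]
    while route[-1] != start:
        route.append(prev[route[-1]])
    return route[::-1]
```

With a few of the path lengths from the tables further below (point 4 to point 3 via 5 or via 6), the algorithm picks the route over point 5, which is the shorter one.<br />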
<br />
== Reflection ==<br />
All sections of the final PICO program were operational and working in time for the final challenge. Together these create a functioning solving plan as we had envisaged. This final program was tested in different simulation environments matching the final challenge. In the first simulation, a copy of the map of the final challenge was used, but without any static or dynamic objects and without closed doors. In this simulation everything worked very well; however, there was a chance that the robot could lose its position while performing its cabinet procedure. At that moment the robot gets very close to the cabinet and, as a result, sees much less of the room, which can cause the position estimation to trip up.<br />
<br />
Other simulation tests used the same map, but with added static obstacles and closed doors. In these tests the robot clearly had more problems. The problems mostly arise from the robot losing its position after matching the corners of static objects with points on the map. Closed doors added the problem that they may obscure corners that should actually be visible. In general, we can conclude that if the number of visible corners gets too low, or there are too many wrong corners, the robot loses its position. It will then try to recover. This works most of the time, but sometimes it is not able to find the correct position again, or recovers in a completely wrong way. When that happens, there is no way to fix it anymore. By running these simulation tests many times, we estimated that we had a 40% chance of completing the final challenge. <br />
<br />
During the final challenge, the PICO robot had to visit the cabinets in the order 0, 1, 3. The door between 0 and 1 was closed, so the robot would have to find an alternative route between them. Furthermore, there were some static obstacles, notably a big one in the hallway and one in room 1, but fewer than we had anticipated. There was also one dynamic object, with which we had not run any tests beforehand, so we were not sure how the robot would react to it. Each group had two runs.<br />
<br />
The first leg of the challenge went very well: the PICO robot first had to determine its orientation, which it was able to do excellently during both runs.<br />
<br />
[[File:orientation in start.gif|center|frame|Hospital challenge - Finding initial orientation]]<br />
<br />
The planned route went from the starting area to the hallway, from there to room 2, and finally to room 0, as this was the shortest route. Indeed, this is exactly what the robot did. It went from waypoint to waypoint on the map, as we had defined them. It did so in a smooth manner, indicating that there were no localization issues at this point. <br />
<br />
[[File:from start to room.gif|center|frame|Hospital challenge - Going from point to point]]<br />
<br />
When it arrived in room 0, it drove up to the correct side of cabinet 0, turned the correct way, and drove up to the cabinet. In the first run it did this correctly, but in the second run it did not drive close enough to the cabinet, and the jury was not sure whether the cabinet was correctly reached. <br />
<br />
[[File:cabinet procedure.gif|center|frame|Hospital challenge - Cabinet procedure]]<br />
<br />
Next, the PICO robot had to go to cabinet 1. Normally the fastest route would be from room 0 to room 1, but the door between them was closed, so the robot had to drive to the door to notice this. At this moment the first localization problems arose, in both runs. As we had noticed in the tests, the robot has difficulty estimating its position when very close to a cabinet, and while it was moving from the cabinet to the door, it first went the wrong way, towards the wall. It did this in both runs. In run 1 it merely scraped the wall, but in run 2 it bumped quite hard into it. The potential field should have prevented the robot from bumping into the wall even after it lost its position, but it was not able to do so. In both runs, however, the robot was eventually able to recover its position. <br />
<br />
It then drove up to the door, waited for a while, and correctly determined that this door was closed. It was able to do this in run 1.<br />
<br />
[[File:alternative route.gif|center|frame|Hospital challenge - Finding an alternative route]]<br />
<br />
In run 2 the PICO robot again lost its position at this point, and again drove straight into the wall, knocking it completely out of place. This meant the end of the second run. In the first run, however, it was able to keep its position and return to the hallway.<br />
<br />
[[File:going to hallway.gif|center|frame|Hospital challenge - Going to next cabinet]]<br />
<br />
In the hallway it had to go from one end to the other, with one big obstacle in the way and a person walking around. This proved to be too much for PICO, as it again seemed to lose its position. It tried to recover, but was not able to localize correctly anymore, which meant the end of the first run.<br />
<br />
[[File:losing its position.gif|center|frame|Hospital challenge - Losing position]]<br />
<br />
In both runs, the PICO robot was able to find the first cabinet, complete the procedure there, and determine another route because of a closed door. However, going from cabinet 0 to cabinet 1 proved too difficult for the robot, which mainly had to do with localization issues. This was something we had anticipated, but we are very happy that PICO was able to complete the first part correctly in both runs. Localization was the biggest and most difficult issue to tackle, so more time could have been spent on this aspect of the program.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component will be given, with a source of where each specification comes from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
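The velocity limits quoted above can be enforced in software by clamping every velocity command before it is sent to the drivetrain. The sketch below is an assumption about how this could be done (it limits the magnitude of the translational velocity so the direction of motion is preserved; whether the framework limits per axis or by magnitude would need to be checked):<br />

```python
MAX_TRANS = 0.5  # m/s, translation limit from the EMC wiki
MAX_ROT = 1.2    # rad/s, rotation limit from the EMC wiki

def clamp(value, limit):
    """Limit a scalar to the symmetric range [-limit, limit]."""
    return max(-limit, min(limit, value))

def limit_command(vx, vy, omega):
    """Clamp a velocity command (vx, vy, omega) to the robot's limits.

    The translational velocity is scaled as a vector, so the commanded
    direction of motion is preserved when the speed is reduced.
    """
    speed = (vx ** 2 + vy ** 2) ** 0.5
    if speed > MAX_TRANS:
        scale = MAX_TRANS / speed
        vx, vy = vx * scale, vy * scale
    return vx, vy, clamp(omega, MAX_ROT)
```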
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 equally spaced measurement angles. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
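From these two numbers, the angle of each individual beam can be computed. The sketch below assumes the field of view is centred on the robot's forward direction and the beams are equally spaced from -FOV/2 to +FOV/2; both assumptions would need to be verified against the simulator and the real robot.<br />

```python
import math

FOV = math.radians(229.2)  # field of view determined with the simulator
NUM_BEAMS = 1000           # number of measurement angles

def beam_angle(i):
    """Angle of beam i relative to the robot's forward direction (rad).

    Assumes beam 0 is at -FOV/2 and the last beam is at +FOV/2,
    with equal spacing in between.
    """
    increment = FOV / (NUM_BEAMS - 1)
    return -FOV / 2 + i * increment
```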
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|thumb|1000px|center|System architecture of robot]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
==== LRF data conditioning ====<br />
A test measurement with the robot was done to obtain raw LRF data. Analysis of this data showed that it contained unwanted points, which fell into two categories. <br />
The first category consists of points that lie directly on the robot. These points may be caused by dirt on the LRF sensor. They are filtered out by removing all data points within a certain radius of the robot; the size of this radius was chosen to be 0.25 m.<br />
The second category consists of unwanted points on the edges of the field of view of the LRF, where the LRF measures part of the exterior of the robot. These points are filtered out by removing the first and last 10 points from the LRF data.<br />
After the data is filtered, it is converted from polar coordinates to Cartesian coordinates. This conditioned data is then sent to the detection block.<br />
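The conditioning steps described above (trimming the scan edges, removing near-range points, converting to Cartesian coordinates) can be sketched as follows. The function name and signature are illustrative, not the actual project code.<br />

```python
import math

MIN_RANGE = 0.25  # m: points closer than this are assumed to be dirt on the sensor
EDGE_TRIM = 10    # beams removed from each end of the scan (robot exterior)

def condition_scan(ranges, angles):
    """Filter a raw LRF scan and convert it to Cartesian points.

    `ranges` and `angles` are equally long lists of measured distances (m)
    and beam angles (rad). Returns a list of (x, y) points in the robot
    frame, with the first and last EDGE_TRIM beams and all points within
    MIN_RANGE of the robot removed.
    """
    points = []
    for r, a in list(zip(ranges, angles))[EDGE_TRIM:-EDGE_TRIM]:
        if r < MIN_RANGE:
            continue  # too close: probably dirt on the sensor
        points.append((r * math.cos(a), r * math.sin(a)))
    return points
```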
<br />
==== Odometry data ====<br />
The odometry data is retrieved from the robot and stored in a variable that is publicly accessible by other objects. The framework only allows the odometry data to be read by one function at a time; this limitation is circumvented by reading it once and caching it in the public variable.<br />
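One way to realise such a publicly readable cache is sketched below. The class and its names are hypothetical; the lock is added so that the (x, y, angle) triple is always read as a consistent whole, which the original implementation may or may not do.<br />

```python
import threading

class OdometryCache:
    """Caches the latest odometry reading in a publicly readable variable.

    Only one function may read from the underlying driver at a time, so a
    single update loop copies each new reading into `latest`, which any
    other object may then read. The lock keeps the triple consistent.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self.latest = (0.0, 0.0, 0.0)  # x [m], y [m], angle [rad]

    def update(self, x, y, a):
        """Called by the single reader of the odometry driver."""
        with self._lock:
            self.latest = (x, y, a)

    def read(self):
        """Called by any object that needs the latest odometry."""
        with self._lock:
            return self.latest
```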
<br />
=== Detection ===<br />
<br />
In detection, the conditioned data of the perception block is used to create a map of the surroundings of the robot. This is then sent to the world model to localize the robot. This section explains how the conditioned LRF data is converted to a map of the walls and corners of the robot's surroundings.<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to be in front of the cabinet. The direction is subtracted from the actual orientation of PICO, and the orientation is corrected afterwards if PICO is not aligned correctly.<br />
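The orientation correction at a cabinet amounts to computing the remaining rotation between the path point's direction and PICO's current orientation. A sketch of this computation is given below; the sign convention and the wrapping to (-pi, pi] are assumptions, chosen so the correction always takes the short way round.<br />

```python
import math

def heading_error(target_direction, robot_angle):
    """Angle PICO still has to rotate to face a cabinet (rad).

    Subtracts the robot's orientation from the path point's direction and
    wraps the result to (-pi, pi], so the correction is the smallest
    rotation. Sign convention is an assumption for this sketch.
    """
    error = target_direction - robot_angle
    while error <= -math.pi:
        error += 2 * math.pi
    while error > math.pi:
        error -= 2 * math.pi
    return error
```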
<br />
[[File:JsonMapMetPathPoints.png|700px|center|thumb|Modified Json map with path points]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
In the current design of the point map, it is possible that some points cannot be reached because of an obstacle on that point. For example, if there is an obstacle on point 8, the path from point 7 to point 8 would be impossible to perform. This would mean that the whole left side of the map cannot be accessed by the robot, since driving from point 7 to point 8 is required to get there. To solve this problem, a number of "backup paths" were added to the point map. These are paths between points that were not initially connected. The paths are defined such that the robot will only choose one if there are no other options to reach a certain point. The backup paths added are: <br />
<br />
{| class="TablePager" style="width: 100px; min-width: 110px; margin-left: 2em; float:left; color: black;"<br />
|-<br />
! scope="col" | '''Backup paths'''<br />
|-<br />
| 5->7<br />
|-<br />
| 7->9<br />
|-<br />
| 8->18<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
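One simple way to make a shortest-path planner treat backup paths as a last resort is to add them to the waypoint graph with a large artificial cost, so that any route over normal edges is always cheaper. This is a sketch of that idea; the penalty value and the exact mechanism used in the project code are assumptions here.<br />

```python
BACKUP_PENALTY = 100.0  # large artificial cost; the exact value is an assumption

def build_graph(edges, backup_edges):
    """Build an undirected waypoint graph with penalised backup paths.

    `edges` and `backup_edges` are lists of (a, b, length) tuples. Because
    the penalty dwarfs any real distance on the map, a shortest-path
    search only routes over a backup path when no normal route exists.
    """
    graph = {}
    for a, b, length in edges:
        graph.setdefault(a, []).append((b, length))
        graph.setdefault(b, []).append((a, length))
    for a, b, length in backup_edges:
        graph.setdefault(a, []).append((b, length + BACKUP_PENALTY))
        graph.setdefault(b, []).append((a, length + BACKUP_PENALTY))
    return graph
```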
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is in the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process this data into useful information. A commonly used algorithm is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (written as a*x + b*y + c = 0)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If our value falls below the limit value then there are no more segments (parts) in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# Lines are fitted from the segment points using the RANSAC algorithm.<br />
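The recursive split step (3.1 to 3.4 above) can be sketched as follows. This is an illustrative implementation, not the project's code snippet; it returns the indices where the segment is broken, including both end points.<br />

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the infinite line through a and b,
    i.e. d = abs(a*x + b*y + c) / sqrt(a^2 + b^2) for the line's
    coefficients, expanded in terms of the two line points."""
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
    return num / den

def split(points, threshold):
    """Recursive split step: break a segment at its most distant point.

    Returns the indices (into `points`) of the break points, including
    both end points of the segment.
    """
    if len(points) < 3:
        return list(range(len(points)))
    dists = [perpendicular_distance(p, points[0], points[-1]) for p in points]
    i_max = max(range(len(points)), key=lambda i: dists[i])
    if dists[i_max] <= threshold:
        return [0, len(points) - 1]  # no more sub-segments in this part
    # split at the most distant point and recurse on both halves
    left = split(points[: i_max + 1], threshold)
    right = split(points[i_max:], threshold)
    return left + [i + i_max for i in right[1:]]
```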
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=Split and merge procedure|Split and merge procedure.|frame]]<br />
<br />
The code snippet of the split and merge function: [https://gitlab.tue.nl/EMC2019/group3/snippets/137]<br />
<br />
As mentioned earlier, each segment is fitted to a line using the RANSAC algorithm. RANSAC (RANdom SAmple Consensus) iterates over various random selections of two points and determines the distance of every other point to the line that is constructed by these two points. If the distance of a point falls within a threshold distance, this point is considered an ''inlier''. The distance ''d'' of this inlier to the line is then compared to the threshold value ''t'' to determine how well it fits the current line iteration. This is described by the score for this point, which is calculated as ''(t - d)/t''. The sum of all scores for one line iteration is then divided by the number of points in the segment that is being evaluated. This value is the final score of the current line iteration. By iterating over various random lines among the points in the segment, the line with the highest score can be selected as being the best fit. The image below demonstrates the basic principle of an unweighted RANSAC implementation, where only the number of inliers accounts for the score of each line.<br />
<br />
[[File:RANSAC_EMC3_2019_.gif|center|alt=Unweighted RANSAC line fitting visualisation|Unweighted RANSAC line fitting visualisation.|frame]]<br />
<br />
The reason that the RANSAC algorithm was selected for fitting the lines in the segments over linear fitting methods, such as least squares, is robustness. During initial testing it became clear that when the laser rangefinder scans across a dead angle, it would detect points in this area that are not actually on the map. These points should not be taken into account when fitting the line. As visualised above, these outliers are not taken into account by RANSAC. If a linear fitting algorithm such as least squares were to be used, these outliers would skew the actual line, resulting in inaccurate line detection.<br />
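The weighted scoring described above, where each inlier contributes ''(t - d)/t'' and the sum is divided by the number of points in the segment, can be sketched as follows. This is a simplified illustration of the scheme, not the project's RANSAC snippet; function names are hypothetical.<br />

```python
import random

def line_score(points, p1, p2, t):
    """Weighted RANSAC score of the line through p1 and p2.

    Each point whose perpendicular distance d to the line is within the
    threshold t contributes (t - d)/t; the sum is divided by the total
    number of points in the segment.
    """
    (x1, y1), (x2, y2) = p1, p2
    den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
    total = 0.0
    for (x0, y0) in points:
        d = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / den
        if d <= t:  # the point is an inlier
            total += (t - d) / t
    return total / len(points)

def ransac_line(points, t, iterations=100, rng=random):
    """Pick the best-scoring line over random pairs of points."""
    best, best_score = None, -1.0
    for _ in range(iterations):
        p1, p2 = rng.sample(points, 2)
        if p1 == p2:
            continue  # duplicate coordinates: no line defined
        s = line_score(points, p1, p2, t)
        if s > best_score:
            best_score, best = s, (p1, p2)
    return best, best_score
```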
<br />
A final line correction needs to be done, because the RANSAC implementation only returns start and end points somewhere between the found vertices. The lines need to be extended so that the corners and endpoints align with the real wall lines. This is done by determining the lines between the points and equating the lines to each other, so that corners are found where consecutive lines intersect. The final endpoints are determined by projecting the found vertices perpendicularly onto the fitted line.<br />
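The perpendicular projection used for this endpoint correction can be written compactly. This sketch projects a detected vertex onto the infinite line through two points of the fitted line; it is an illustration, not the project's implementation.<br />

```python
def project_onto_line(p, a, b):
    """Project point p perpendicularly onto the infinite line through a and b.

    Used here to snap a detected vertex onto a fitted wall line, so the
    final wall endpoint lies exactly on that line.
    """
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # parameter t of the foot of the perpendicular along the line a -> b
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)
```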
<br />
The code snippet of the RANSAC function: [https://gitlab.tue.nl/EMC2019/group3/snippets/136]<br />
<br />
=== Monitor block ===<br />
The function of the monitor block is to keep track of the state of the software, as well as to command the state changes. The block also processes the interaction between the robot and the outside world. This includes the text-to-speech function, saving the cabinet snapshots, and the input of the cabinet order.<br />
<br />
==== State chart ====<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run: on every tick, it is checked whether the current state has fulfilled its exit conditions, and if so, the state machine transitions to the next state.<br />
<br />
The state chart is thereby part of the monitor block of the system architecture.<br />
<br />
The state chart describes the steps the software needs to take in order to perform the final challenge. Each state describes an action the software needs to perform. Once this action is completed, the software flows to the next state. At a state with multiple outgoing arrows, a decision needs to be made as to which state the software flows. This decision is always an 'if' statement, and it is made during the action that is performed in the state. The figure below shows the state chart.<br />
<br />
[[File:State machine final.png|800px|center|State chart|thumb]]<br />
<br />
The state chart starts at the red dot. The first state is for inputting the cabinet order. This state was bypassed, however, since the method of inputting the cabinet order was defined later in the assignment. The next state is for declaring variables used by the state chart. The state "Check whether at starting point" and the states to its right are for positioning the robot on the start point. Since, at the start of the challenge, the exact position of the robot is unknown, the first task of the robot is to position itself. This state uses localization to determine where the robot is relative to a defined start point. This start point is a point of the point map, which was discussed earlier in "Path planning". <br />
The movement of the robot is split into two states: one for rotating the robot towards the next point, and one for driving the robot towards the next point. Splitting the movement into two separate states was done to simplify the movement and to reduce the chance of collision. During every movement state, the potential field is turned on so as to avoid collisions. This is further explained in the chapter "Drivecontrol".<br />
In the "Set point to visit" state, the next cabinet that needs to be visited is selected. If there are no more cabinets left to visit, the state chart goes to the "Finished" state. The software then calculates the shortest route from its current point to the cabinet. The next states are for moving the robot from point to point until it reaches the cabinet. If a path is blocked, the software updates the point map by removing that path between the points. The software then returns to the "Set point to visit" state and recalculates the route.<br />
<br />
The state chart is implemented in the software with two functions per state. The first function starts the tasks that need to be performed in that state. The second function checks whether all the tasks of that state are completed. Once all the tasks are completed, the software flows to the next state. Using two functions allows other parts of the software to run in parallel with the state chart.<br />
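The two-function pattern described above can be sketched as follows (a Python sketch; the state names and the transition table are illustrative, the real implementation is part of the C++ monitor object). Because neither function blocks, a single non-blocking tick lets the rest of the software run in parallel:<br />

```python
class StateMachine:
    """Each state has a start function (launches its tasks once) and a
    done check (its exit condition); tick() never blocks."""

    def __init__(self, transitions, initial):
        # transitions: state -> (start_fn, done_fn, next_state)
        self.transitions = transitions
        self.state = initial
        self.started = False

    def tick(self):
        start_fn, done_fn, next_state = self.transitions[self.state]
        if not self.started:
            start_fn()           # launch the tasks of this state once
            self.started = True
        elif done_fn():          # exit condition fulfilled -> transition
            self.state = next_state
            self.started = False
```

A driving state would, for example, command a setpoint in its start function, while its done check merely compares the current position against the target.<br />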
<br />
=== World model block ===<br />
The world model is the central object in the software architecture. The purpose of the world model is to act on changes in the positioning of PICO, such as sensing where on the map PICO is, determining to which pathpoints to drive, and in which direction to drive in order to reach a pathpoint. An additional responsibility of the world model is to visualise the conditioned LRF data and the pathpoint to which PICO is driving.<br />
<br />
==== Position estimation ====<br />
Using the wall finding algorithm described earlier, we are able to extract the relative position, in Cartesian coordinates, of all the visible corners. By matching these corners to the known corners from the JSON file we are given, it is possible to determine the location of the PICO robot. This is done in one big function containing several steps. This function constantly runs in the background and determines the absolute position of the PICO robot, with the origin of this frame being the zero point of the JSON file.<br />
<br />
The first step in the localization function is extracting all corner positions from the JSON file, as well as getting all visible corner positions from the wall finding algorithm. Of course, these two kinds of positions are given in different frames, so a conversion needs to be made from one frame to the other. The coordinates of the corners given by the wall finding algorithm are converted from a relative frame to an absolute frame, in order to make them comparable to the JSON file coordinates. This may seem like a catch-22 situation, as converting these coordinates from a relative to an absolute frame requires the position and orientation of the PICO robot, which is exactly what we are trying to determine. The conversion is therefore made using the last correctly found absolute coordinates of the PICO robot. Since this function was run not too long ago, we know these last known absolute coordinates cannot be off by much. <br />
<br />
The rest of the function can be divided into two steps: first it determines the orientation, then the position. For orientation finding, the relative-to-absolute conversion is made many times, using candidate orientations from 'the last known orientation − 0.3 rad' to 'the last known orientation + 0.3 rad' in steps of 0.01 rad. This yields a list of visible corner positions in absolute coordinates for each candidate orientation, which is compared to the corner coordinates from the JSON file. One might expect the next step to be finding which set of corners matches the JSON file best, but this is not the case. Instead, the next step is to find which set of corners has the least variance in error when compared to the JSON file. This method is used because the real absolute position will differ from the absolute position used in the frame conversion, so there will always be an error. We know, however, that this error should be the same for all corners when the orientation is correct, so the variance should be as low as possible. All sets of corners are therefore compared to the JSON file, and the one with the lowest error variance gives us the correct orientation.<br />
<br />
With the orientation known, the position can be determined. This is done by going through all found corners, looking at each found absolute position, and finding the closest coordinate in the JSON file. This works because we know the real absolute position cannot be too far off from the last known absolute position. When this has been done for every found corner, all the errors between the found corners and their best matching JSON coordinates are compared. Most of these errors will likely be roughly the same, with perhaps some being completely different. These completely different errors are thrown out, and only the matching errors are kept. The mean of the remaining errors is calculated, and the resulting number is the mismatch between the last known PICO coordinates and the current real PICO coordinates. The new PICO coordinates can then be saved.<br />
<br />
The function then checks whether its new position is a realistic result by comparing it to the previous known position. Since the previous known position cannot be off by much, a major difference between the new and old position means that something must be wrong. In that case, the function does not save the new position and instead keeps the last correct position as its current position. The function simply reruns with new sensor data to make a new estimation which, hopefully, will be better. If the function discards the new position several times in a row, it widens its search ranges; for example, the range for orientation finding increases from "−0.3 to 0.3" rad to "−pi to pi" rad. These wider ranges are also used when the function is run for the first time, as during initialization the correct position is only known roughly, and nothing is known about the orientation.<br />
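The orientation search can be sketched as follows. This is a simplified Python sketch (the real implementation is the linked C++ snippet): each candidate orientation rotates the visible corners into the absolute frame using the last known position, each corner is matched to its nearest map corner, and the candidate with the smallest variance of the match errors wins. The function and parameter names are illustrative.<br />

```python
import math

def best_orientation(rel_corners, map_corners, last_pos, last_theta,
                     half_range=0.3, step=0.01):
    """Return the candidate orientation whose corner-match errors have
    the least variance, as described above."""
    best_var, best_theta = float("inf"), last_theta
    n_steps = int(round(2 * half_range / step))
    for i in range(n_steps + 1):
        theta = last_theta - half_range + i * step
        errors = []
        for (rx, ry) in rel_corners:
            # relative -> absolute, using the last known position
            ax = last_pos[0] + rx * math.cos(theta) - ry * math.sin(theta)
            ay = last_pos[1] + rx * math.sin(theta) + ry * math.cos(theta)
            # error to the nearest known map corner
            errors.append(min(math.hypot(ax - mx, ay - my)
                              for (mx, my) in map_corners))
        mean = sum(errors) / len(errors)
        var = sum((e - mean) ** 2 for e in errors) / len(errors)
        if var < best_var:
            best_var, best_theta = var, theta
    return best_theta
```

Note that at the correct orientation the errors are all equal to the (unknown) position offset, so the variance drops to zero even though the mean error does not; this is why variance, not the error itself, is minimised.<br />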
<br />
The code snippet of the localisation function: [https://gitlab.tue.nl/EMC2019/group3/snippets/138]<br />
<br />
==== Pathpoint route calculation ====<br />
In the paragraph "Path planning", the method the robot uses to navigate through the hospital was explained. The robot uses a point map, where each point is connected to neighboring points. To get from a point to a non-neighboring point, the robot needs to travel from point to point. This list of points is called the route the robot needs to travel, and the pathway between two points is referred to as a path. Often, there are multiple routes from one point to another. In this case, the shortest route is preferred, since it minimizes the chance of the robot losing its position on the map.<br />
<br />
In order to obtain the shortest route, Dijkstra's algorithm is used. This algorithm obtains the shortest route between two points, given a point map and the weights of the paths between the points. A weight can be based on the distance between the points or on the difficulty of the path. Dijkstra's algorithm works by creating a list of the distance from a start point to every other point. The algorithm then checks the distance to each other point, while updating the list of shortest distances. The algorithm also remembers the previous point from which the shortest route came, so once the shortest distance list is completed, the algorithm can backtrack the route from the end point to the start point.<br />
<br />
The figure below illustrates how the algorithm works. Inside the points is the current shortest path to the point, while the number near the path is the weight of that path. This image is from steemit.com [https://steemit.com/popularscience/@krishtopa/dijkstra-s-algorithm-of-finding-optimal-paths].<br />
<br />
[[File:Dijkstra_EMC3_2019.gif|center|Visualization Dijkstra algorithm|frame]]<br />
<br />
In the software, the point map is represented as a matrix. This matrix contains the distance from each point to each other point. The figure below shows such a matrix. In this matrix, entry d[1,n] is the weight of the path from point 1 to point n. Value d[n,1] is the same, as this is the weight from point n to point 1. If, for example, point 1 and point 2 were not connected, the values in the matrix would be d[1,2] = d[2,1] = 0.<br />
The weight of a path is represented by a number greater than 0: the larger the number, the more difficult the path. The diagonal of the matrix contains the distances from each point to itself. This value should always be zero (d[n,n] = 0).<br />
<br />
<br />
[[File:Pointmap_matrix.PNG|200px|center|thumb|Point map matrix]]<br />
<br />
Based on the mentioned properties of the point matrix, it can be concluded that this matrix should always be symmetric and should always have only zeroes on the diagonal. These facts can be used to check whether the inputted map is correct and does not contain errors. Furthermore, editing the point matrix is simple: if a path between point n and point m needs to be added or changed, all that needs to be done is set d[m,n] = d[n,m] = w, where w is the new weight of that path. If a path needs to be removed, w should be set to 0.<br />
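Dijkstra's algorithm on such a symmetric weight matrix (with 0 meaning "no path") can be sketched as follows. This Python sketch is for illustration (the actual implementation is in C++), and the example matrix below is illustrative, not the real hospital point map:<br />

```python
import heapq

def shortest_route(d, start, goal):
    """Dijkstra on a symmetric weight matrix d, where d[i][j] > 0 is the
    weight of the path between points i and j and 0 means no path.
    Returns (total weight, route as a list of point indices)."""
    n = len(d)
    dist = [float("inf")] * n
    prev = [None] * n
    dist[start] = 0.0
    queue = [(0.0, start)]
    while queue:
        cost, u = heapq.heappop(queue)
        if u == goal:
            break
        if cost > dist[u]:
            continue  # stale queue entry
        for v in range(n):
            if d[u][v] > 0 and cost + d[u][v] < dist[v]:
                dist[v] = cost + d[u][v]
                prev[v] = u  # remember where the shortest route came from
                heapq.heappush(queue, (dist[v], v))
    # backtrack from the end point to the start point
    route, node = [], goal
    while node is not None:
        route.append(node)
        node = prev[node]
    return dist[goal], route[::-1]
```

Removing a blocked path, as described in the state chart, then amounts to setting d[m][n] = d[n][m] = 0 and rerunning this function.<br />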
<br />
==== Target-based driving ====<br />
Now that the absolute positions of both PICO and the next pathpoint are known on the map, the robot needs to know in which direction to drive. That means that the location of the pathpoint that PICO should drive to needs to be transformed to the coordinate space of PICO. This way, it can be determined in which relative direction PICO should drive and how much PICO should rotate before driving. The result of these calculations is visualised in the image below.<br />
<br />
[[File:Coordinate_transform_EMC3_2019.png|center|Diagram of coordinate space transformation parameters|500px|thumb]]<br />
<br />
The first step is to determine the difference vector [[File:V_delta_EMC3_2019.png|frameless|upright=0.1]] between the position of the pathpoint [[File:V_point_EMC3_2019.png|frameless|upright=0.2]] and the position of PICO [[File:V_pico_EMC3_2019.png|frameless|upright=0.2]]. This is done by a simple subtraction:<br />
<br />
[[File:V_delta_calc_EMC3_2019.png|center|frameless|upright=0.75]]<br />
<br />
Then the coordinate space matrix of PICO [[File:S_pico_EMC3_2019.png|frameless|upright=0.2]] needs to be determined in order to create the transition matrix. This is done by using the absolute rotation of PICO [[File:Th_pico_EMC3_2019.png|frameless|upright=0.2]] in order to calculate the unit vector components of the x- and y-axes of the PICO coordinate space within the absolute coordinate space.<br />
<br />
[[File:S_pico_calc_EMC3_2019.png|center|frameless|upright=1.5]]<br />
<br />
Then the relative position vector of the pathpoint in PICO's coordinate space [[File:V_point_pico_EMC3_2019.png|frameless|upright=0.2]] can be calculated by multiplying the inverse of the PICO space matrix with the difference vector:<br />
<br />
[[File:V_point_pico_calc_EMC3_2019.png|center|frameless|upright=0.75]]<br />
<br />
With the relative coordinates known of the next pathpoint, PICO knows how far to turn in a certain direction before driving and where to aim for when avoiding obstacles.<br />
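The steps above can be sketched as follows (a Python sketch; the real implementation is in C++). Since the coordinate space matrix of PICO is a pure rotation matrix, its inverse is simply its transpose, which is used directly here. The function names are illustrative:<br />

```python
import math

def point_in_pico_frame(v_point, v_pico, th_pico):
    """Express an absolute pathpoint position in PICO's coordinate space."""
    dx = v_point[0] - v_pico[0]      # difference vector v_delta
    dy = v_point[1] - v_pico[1]
    c, s = math.cos(th_pico), math.sin(th_pico)
    # inverse (= transpose) of the rotation matrix S_pico applied to v_delta
    x_rel = c * dx + s * dy
    y_rel = -s * dx + c * dy
    return x_rel, y_rel

def heading_to(v_point, v_pico, th_pico):
    """Angle PICO has to rotate before driving straight at the pathpoint."""
    x_rel, y_rel = point_in_pico_frame(v_point, v_pico, th_pico)
    return math.atan2(y_rel, x_rel)
```

For example, with PICO at (1, 1) facing the absolute +y direction, a pathpoint at (1, 2) ends up at (1, 0) in PICO's frame, i.e. straight ahead, requiring no rotation.<br />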
<br />
==== Visualisation ====<br />
Several visualisation functions were made for the sake of debugging the trajectory of PICO. These functions are built upon the OpenCV framework and are meant to streamline the process of drawing the LRF data, the detected walls, targeted pathpoints, and PICO itself. These functions require a Mat object as an input (pass by reference) and draw their actions on there. This way, all the data can be passed into these functions using world units without the need for converting everything to pixel positions.<br />
<br />
By stacking all the mentioned functions on a single Mat object, the main visualiser of the program was created. A gif of the visualiser in a simulated environment is displayed below. The blue point represents the active target of PICO at any given time. The walls are drawn in white on top of the green LRF data.<br />
<br />
[[File:Visualisation_EMC3_2019.gif|center|Visualisation of the final software|frame]]<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement, thus preventing slip. The reduction of slip in the motion of PICO increases the accuracy of its movement on top of the fluency. The S-curve is implemented in two different functions: the function 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while the second function, 'Drive distance', accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
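A jerk-limited S-curve between two velocities can be sketched as follows. This is a Python sketch under simplifying assumptions (constant +jerk for the first half, constant −jerk for the second half, no cruise phase); the actual Drive implementation and its parameters are not reproduced here, so the names are illustrative:<br />

```python
def s_curve_velocity(t, total_time, v_start, v_end):
    """Jerk-limited velocity setpoint at time t: the velocity follows an
    S-shape, so acceleration starts and ends at zero (limited jerk)."""
    if t <= 0.0:
        return v_start
    if t >= total_time:
        return v_end
    dv = v_end - v_start
    jerk = 4.0 * dv / total_time ** 2   # chosen so v(total_time) = v_end
    if t < total_time / 2.0:
        # first half: accelerate with constant positive jerk
        return v_start + 0.5 * jerk * t * t
    # second half: mirror image with constant negative jerk
    return v_end - 0.5 * jerk * (total_time - t) ** 2
```

The two halves meet at ''v = (v_start + v_end)/2'' with equal acceleration, so both velocity and acceleration are continuous, which is what prevents the slip described above.<br />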
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px|center|thumb|Potential field principle]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px|center|thumb|Potential field practical example]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero; once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
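The repulsion-only potential field described above can be sketched as follows (a Python sketch; the avoidance radius and the weighting are illustrative, not the tuned values used on PICO). Every LRF point within the avoidance radius contributes a component vector pointing away from it, weighted by how close it is, and the components are summed:<br />

```python
import math

def repulsion_vector(lrf_points, radius=0.5):
    """Potential field sum vector from obstacle points (x, y) given in
    PICO's own frame; points outside the avoidance radius contribute
    nothing, closer points push harder."""
    fx, fy = 0.0, 0.0
    for (x, y) in lrf_points:
        d = math.hypot(x, y)
        if 0.0 < d < radius:
            weight = (radius - d) / radius   # linear falloff with distance
            fx -= weight * x / d             # unit direction away from point
            fy -= weight * y / d
    return fx, fy
```

This reproduces the four cases in the figure: no nearby points give a zero vector, a symmetric corridor cancels out, and a closer wall on one side yields a net push towards the other side.<br />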
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the position data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected.<br />
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Test Goals==<br />
Several tests were executed during the course of the project, each with a different goal. The most important goals have been summarised below:<br />
<br />
* Test the laser range finder and the encoders.<br />
* Determine the static friction in the actuators for the x- and y-direction, and the rotation.<br />
* Collect laserdata for the spatial recognition functions.<br />
* Test the Drivecontrol functionality, consisting of the S-curve implementation and the potential field.<br />
* Test the full system on the example map.<br />
<br />
==Results==<br />
The results of each test are described in separate parts.<br />
<br />
===Laser Range Finder & Encoders===<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This angle is measured in 1000 parts, at a rate that can be determined by the user. However, the actual distance range turned out to be larger on both ends. During the test, laser data appeared within the 10 cm radius of the robot; this data produced false positives and had to be filtered out. The maximum range of the range finder was actually larger than 10 meters, but this does not limit the functionality of the robot. The other properties of the laser range finder were accurate.<br />
<br />
The values supplied by the encoders are automatically converted to a distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. Due to the three-wheel configuration of the base, the ''x''- and ''y''-directions are less accurate than the rotation.<br />
<br />
===Static Friction===<br />
The actuators have a significant amount of static friction. The exact amount of friction in both the translational and the rotational direction was difficult to determine. An attempt was made by slightly increasing the input velocity for a certain direction until the robot began to move. The result differed for each test, so the average was taken as the final value for the Drivecontrol. It is important to note that the friction was significantly less in the rotational direction than in the translational directions. The ''y''-direction had the most friction and also tended to rotate instead of moving in a straight line.<br />
<br />
===Laserdata===<br />
Due to the limited number of testing moments, laser data was recorded in different situations. This data could be used to test the spatial recognition functions outside the testing moments. Data was recorded in different orientations and also while the robot was moving.<br />
<br />
===Drivecontrol Test===<br />
This test was executed to determine if the smoothness and accuracy of the Drivecontrol functions were sufficient or if the acceleration would have to be reduced. Both the smoothness and the accuracy were satisfactory, especially for the rotational movement.<br />
<br />
The potential field was also tested by walking towards the robot when it was driving forward. It successfully evaded the person in all directions while continuously driving forward.<br />
<br />
===Full System Test===<br />
The full system test was executed on the provided example map. However, during this test, no dynamic or static obstacles were present that were not on the map. PICO was able to find the cabinets in the correct order from different starting positions and orientations, but the limited space between the two cabinets caused some difficulties. Since PICO is supposed to return to its previous navigation point if the orientation in front of a cabinet fails, and this point was in between the two cabinets, this caused some issues. However, in the Hospital challenge map, no two navigation points were placed as close together as in the example map, which should resolve this issue.<br />
<br />
= Conclusion & Recommendations =<br />
In conclusion, the software implementation of the design described in this Wiki is capable of fulfilling the basic functionality of the hospital challenge. That is, if the hospital environment only contains the necessary wall geometry and the cabinets. The addition of static and dynamic obstacles proved difficult to handle by the position estimation code, and ultimately led to the robot miscalculating the orientation at the end of the hospital challenge.<br />
<br />
It is recommended to dedicate the last week before the challenge to testing all the integrated code. This is because the software described in this Wiki was only ever fully implemented during the challenge itself, and the execution proved that the robot was still susceptible to disturbances in the environment in the form of dynamic obstacles. This could have been prevented by fine-tuning several variables in the code. Secondly, it is recommended to dedicate the most time and resources to the position estimation, as this is a crucial element that is relied upon for decision-making in the state machine and calculating the movement trajectory. The potential field implementation, however, proved very robust and simple to implement. It is therefore recommended to other groups to implement this to avoid collision with static and dynamic obstacles.<br />
<br />
[[File:Pico3th.png|thumb|center|400px|alt=Pico3th|Proud to be in 3rd place [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#The_day_of_the_challenge]]]<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Useful information ==<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge. Below a small clip of the escape room challenge.<br />
<br />
[[File:EscapeRoom.gif|center|alt=Clip of group 3 at the escape room|frame|Clip of group 3 at the escape room.]]<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|State chart Escape room challenge|center|thumb|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png|frame|center|Example path point map]]<br />
<br />
Points are placed at different locations on the map: at cabinets, on junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from a neighboring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether the path goes through a doorway or hallway, or through a room. This can help in deciding how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighboring, the software creates a route to that point. This route is a list of points to which the robot needs to drive to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
All sections of the final PICO program were operational and working in time for the final challenge. Together, these formed a functioning plan as we had envisaged. The final program was tested in different simulation environments matching the final challenge. In the first simulation, a copy of the map of the final challenge was used, but without any static or dynamic objects and without closed doors. In this simulation everything worked very well; however, there was a chance the PICO robot could lose its position while performing its cabinet procedure. At that moment the PICO robot gets very close to the cabinet and, as a result, sees a lot less of the room, which can cause the position estimation to trip up. Other simulation tests used the same map, but with added static obstacles and closed doors. In these tests it was noticed that the PICO robot had more problems. The problems mostly arose from the PICO robot losing its position due to matching the corners of static objects with points on the map. Doors added the further problem that a closed door may obscure corners that should actually be there. In general, we determined that if the number of visible corners gets too low, or there are too many wrong corners, the PICO robot will lose its position. At that moment it will try to fix this. This works most of the time, but sometimes it is not able to find the correct position again, or fixes it in a completely wrong way. When that happens, there is no way to recover anymore. By running these simulation tests many times, we estimated that we had a 40% chance of completing the final challenge. <br />
<br />
During the final challenge, the pico robot had to visit the cabinets in the order 0, 1, 3. The door between 0 and 1 was closed, so the pico robot had to find an alternative route between them. Furthermore, there were some static obstacles, notably a big one in the hallway and one in room 1, but fewer than we had anticipated. There was also one dynamic object, with which we had not run any tests beforehand, so we were not sure how the pico robot would react to it. Each group could do two runs.<br />
<br />
The first leg of the challenge went very well. The pico robot first had to determine its orientation, which it did excellently in both runs.<br />
<br />
[[File:orientation in start.gif|center|frame|Hospital challenge - Finding initial orientation]]<br />
<br />
It then had to go from the starting area to the hallway, from there to room 2, and end in room 0, as this is the shortest route. Indeed, this is exactly what the robot did. It went from waypoint to waypoint on the map, as we had defined it. It did so in a smooth manner, indicating that there were no issues with localization at this point. <br />
<br />
[[File:from start to room.gif|center|frame|Hospital challenge - Going from point to point]]<br />
<br />
When it arrived in room 0, it drove up to the correct side of cabinet 0, turned the correct way, and drove up to the cabinet. In the first run it did this correctly, but in the second run it did not drive close enough to the cabinet, and the jury was not sure whether the cabinet was correctly reached. <br />
<br />
[[File:cabinet procedure.gif|center|frame|Hospital challenge - Cabinet procedure]]<br />
<br />
Next, the pico robot had to go to cabinet 1. Normally the fastest route would be from room 0 to room 1, but the door between them was closed, so the pico robot had to drive to the door to notice this. At this moment the first localization problems arose, in both runs. As we had noticed in the tests, the pico robot had difficulty estimating its position when very close to the cabinet, and while moving from the cabinet to the door, it first went the wrong way, towards the wall. It did this in both runs. In run 1 it merely scraped the wall, but in run 2 it bumped quite hard into it. The potential field should have stopped the pico robot from bumping into the wall even though it had lost its position, but it was not able to prevent this. In both runs, however, the pico robot was eventually able to fix its position. <br />
<br />
It then drove up to the door, waited for a while, and then correctly determined that this door was closed. It was able to do this correctly in run 1.<br />
<br />
[[File:alternative route.gif|center|frame|Hospital challenge - Finding an alternative route]]<br />
<br />
In run 2 the pico robot again lost its position at this point, and again drove straight into the wall, knocking it completely out of place. This meant the end of the second run. In the first run, however, it was able to keep its position and move to the hallway again.<br />
<br />
[[File:going to hallway.gif|center|frame|Hospital challenge - Going to next cabinet]]<br />
<br />
In the hallway it had to go from one end to the other, with one big obstacle in the way and a person walking around. This proved to be too much for pico, as it again seemed to lose its position. It tried to fix this, but was no longer able to localize correctly, which meant the end of the first run.<br />
<br />
[[File:losing its position.gif|center|frame|Hospital challenge - Losing position]]<br />
<br />
In both runs, the pico robot was able to find the first cabinet, complete the procedure there, and determine an alternative route because of a closed door. However, going from cabinet 0 to cabinet 1 proved too difficult for the robot, mainly due to localization issues. This was something we had anticipated, but we are very happy that pico was able to complete the first part correctly in both runs. Localization was the biggest and most difficult issue to tackle, so more time could have been spent on this aspect of the program.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route the robot needs to follow in order to reach a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, together with the source they come from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°, which is divided into 1000 measurement angles. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|thumb|1000px|center|System architecture of robot]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
==== LRF data conditioning ====<br />
A test measurement with the robot was done to obtain raw LRF data. Analysis of this data showed that it contained unwanted points, which fell into two categories. <br />
The first category consists of points that lie directly on the robot. These points may be caused by dirt on the LRF sensor. They were filtered out by removing all data points within a certain radius of the robot; the size of this radius was chosen to be 0.25 m.<br />
The second category consists of unwanted points at the edges of the field of view of the LRF, where the LRF measures part of the exterior of the robot. These points were filtered out by removing the first and last 10 points from the LRF data.<br />
After the data is filtered, it is converted from polar coordinates to Cartesian coordinates. This conditioned data is then sent to the detection block.<br />
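The conditioning steps above can be sketched as follows. This is a minimal Python sketch; the function name <code>condition_lrf</code> and the <code>angle_min</code>/<code>angle_increment</code> parameters are illustrative assumptions, not the group's actual C++ implementation.

```python
import math

MIN_RADIUS = 0.25   # metres; closer points are assumed to be dirt on the sensor
EDGE_DROP = 10      # beams discarded at each edge of the field of view

def condition_lrf(ranges, angle_min, angle_increment):
    """Filter raw LRF ranges and convert them to Cartesian (x, y) points."""
    points = []
    for i, r in enumerate(ranges):
        # Drop the edge beams that see the robot's own exterior.
        if i < EDGE_DROP or i >= len(ranges) - EDGE_DROP:
            continue
        # Drop points within the exclusion radius around the robot.
        if r < MIN_RADIUS:
            continue
        a = angle_min + i * angle_increment
        points.append((r * math.cos(a), r * math.sin(a)))
    return points
```

The output is a list of (x, y) points in the robot frame, ready for the detection block.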
<br />
==== Odometry data ====<br />
The odometry data is retrieved from the robot and stored in a variable that is publicly accessible by other objects.<br />
<br />
=== Detection ===<br />
<br />
In detection, the conditioned data of the perception block is used to create a map of the surroundings of the robot. This is then sent to the world model to localize the robot. This section explains how the conditioned LRF data is converted into a map of the walls and corners of the robot's surroundings.<br />
<br />
==== Path planning ====<br />
The path points are determined both automatically and by hand. The program loads the Json map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified in the direction variable. The direction is subtracted from the real orientation of PICO, after which PICO is corrected if it is not aligned properly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px|center|thumb|Modified Json map with path points]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
In the current design of the point map, there is a possibility that some points cannot be reached because an obstacle is on that point. For example, if there is an obstacle on point 8, the path from point 7 to point 8 would be impossible to perform. This would mean that the whole left side of the map cannot be accessed by the robot, since driving from point 7 to point 8 is required to get there. To solve this problem, a number of "backup paths" were added to the point map. These are paths between points that were not initially connected. The paths are defined such that the robot will only choose one if there are no other options to get to a certain point. The backup paths added are: <br />
<br />
{| class="TablePager" style="width: 100px; min-width: 110px; margin-left: 2em; float:left; color: black;"<br />
|-<br />
! scope="col" | '''Backup paths'''<br />
|-<br />
| 5->7<br />
|-<br />
| 7->9<br />
|-<br />
| 8->18<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used algorithm is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, we perform the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance from the point to the line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If our value falls below the limit value then there are no more segments (parts) in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# Lines are fitted from the segment points using the RANSAC algorithm.<br />
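The recursive split of step 3 can be sketched as follows. This is an illustrative Python sketch, not the group's actual C++ code; the function names and threshold handling are assumptions.

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x1, y1), (x2, y2) = a, b
    num = abs((y2 - y1) * p[0] - (x2 - x1) * p[1] + x2 * y1 - y2 * x1)
    den = math.hypot(x2 - x1, y2 - y1)
    return num / den

def split(points, threshold):
    """Recursively split a segment wherever the farthest point exceeds the
    distance limit. Returns a list of segments (each a list of points)."""
    if len(points) < 3:
        return [points]
    a, b = points[0], points[-1]          # end points of the segment
    d_max, i_max = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], a, b)
        if d > d_max:
            d_max, i_max = d, i
    if d_max <= threshold:
        return [points]                   # no further sub-segments
    # Split at the farthest point and recurse on both halves.
    return split(points[:i_max + 1], threshold) + split(points[i_max:], threshold)
```

For an L-shaped scan, this returns two segments that share the corner point, which is exactly the corner detection behaviour described above.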
<br />
Below is a visual representation of the split principle. The original image is taken from the EMC course 2017 group 10 page [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=Split and merge procedure|Split and merge procedure.|frame]]<br />
<br />
The code snippet of the split and merge function: [https://gitlab.tue.nl/EMC2019/group3/snippets/137]<br />
<br />
As mentioned earlier, each segment is fitted to a line using the RANSAC algorithm. RANSAC (RANdom SAmple Consensus) iterates over various random selections of two points and determines the distance of every other point to the line that is constructed by these two points. If the distance of a point falls within a threshold distance, this point is considered an ''inlier''. The distance ''d'' of this inlier to the line is then compared to the threshold value ''t'' to determine how well it fits the current line iteration. This is described by the score for this point, which is calculated as ''(t - d)/t''. The sum of all scores for one line iteration is then divided by the number of points in the segment that is being evaluated. This value is the final score of the current line iteration. By iterating over various random lines among the points in the segment, the line with the highest score can be selected as being the best fit. The image below demonstrates the basic principle of an unweighted RANSAC implementation, where only the number of inliers accounts for the score of each line.<br />
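The weighted scoring scheme described above can be sketched like this. This is an illustrative Python sketch, not the group's C++ snippet; the iteration count and random seed are arbitrary assumptions.

```python
import math
import random

def ransac_line(points, threshold, iterations=100, seed=0):
    """Fit a line to a segment: repeatedly pick two random points and score
    the resulting line by sum((t - d)/t) over inliers, divided by the number
    of points in the segment."""
    rng = random.Random(seed)
    best_score, best_pair = -1.0, None
    for _ in range(iterations):
        a, b = rng.sample(points, 2)
        score = 0.0
        for p in points:
            # Perpendicular distance of p to the line through a and b.
            num = abs((b[1]-a[1])*p[0] - (b[0]-a[0])*p[1] + b[0]*a[1] - b[1]*a[0])
            den = math.hypot(b[0]-a[0], b[1]-a[1])
            d = num / den
            if d <= threshold:              # inlier: reward closeness to the line
                score += (threshold - d) / threshold
        score /= len(points)
        if score > best_score:
            best_score, best_pair = score, (a, b)
    return best_pair, best_score
```

Outliers (points with d above the threshold) contribute nothing to the score, which is why they cannot skew the fitted line the way they would in a least-squares fit.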
<br />
[[File:RANSAC_EMC3_2019_.gif|center|alt=Unweighted RANSAC line fitting visualisation|Unweighted RANSAC line fitting visualisation.|frame]]<br />
<br />
The reason that the RANSAC algorithm was selected for fitting the lines in the segments over linear fitting methods, such as least squares, is robustness. During initial testing it became clear that when the laser rangefinder scans across a dead angle, it would detect points in this area that are not actually on the map. These points should not be taken into account when fitting the line. As visualised above, these outliers are not taken into account by RANSAC. If a linear fitting algorithm such as least squares were to be used, these outliers would skew the actual line, resulting in inaccurate line detection.<br />
<br />
A final line correction needs to be done because the RANSAC implementation only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and intersecting adjacent lines with each other. The final end points are determined as the points on the fitted line onto which the found vertices project perpendicularly.<br />
<br />
The code snippet of the RANSAC function: [https://gitlab.tue.nl/EMC2019/group3/snippets/136]<br />
<br />
=== Monitor block ===<br />
The function of the monitor block is to keep track of the state of the software, as well as to command the state changes. The block also processes the interaction between the robot and the outside world. This includes the text-to-speech function, saving the cabinet snapshots and the input of the cabinet order.<br />
<br />
==== State chart ====<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has completed its tasks, so that the software can flow to the next state.<br />
<br />
The state chart is part of the "World model block" from the system architecture.<br />
<br />
The state chart describes the steps the software needs to take in order to perform the final challenge. Each state describes an action the software needs to perform. Once this action is completed, the software flows to the next state. At states with multiple output arrows, a decision is made as to which state the software flows; this decision is always an 'if' statement, evaluated during the action that is performed in the state. The figure below shows the state chart.<br />
<br />
[[File:State machine final.png|800px|center|State chart|thumb]]<br />
<br />
The state chart starts at the red dot. The first state is for inputting the cabinet order. This state was bypassed, however, since the method of inputting the cabinet order was later defined in the assignment. The next state is for declaring the variables used by the state chart. The state "Check whether at starting point" and the states to its right are for positioning the robot on the start point. Since, at the start of the challenge, the exact position of the robot is unknown, the first task for the robot is to position itself. The state uses localization to see where the robot is compared to a defined start point. This start point is a point of the point map, which was discussed earlier in "Path planning". <br />
The movement of the robot is split into two states: one for rotating the robot towards the next point and one for driving the robot towards the next point. Splitting movement into two separate states was done to simplify the movement and to reduce the chance of collision. During every movement state, the "potential field" is turned on to avoid collisions. This is further explained in the chapter "Drivecontrol".<br />
In the "Set point to visit" state, the next cabinet that needs to be visited is selected. If there are no more cabinets left to visit, the state chart goes to the "Finished" state. The software then calculates the shortest route from its current point to the cabinet. The next states move the robot from point to point until it reaches the cabinet. If a path is blocked, the software updates the point map by removing that path between the points. The software then returns to the "Set point to visit" state and recalculates the route.<br />
<br />
The state chart is implemented in the software with two functions per state. The first function starts the tasks that need to be performed in the state. The second function checks whether all the tasks of that state are completed. Once all the tasks are completed, the software flows to the next state. Using two functions allows other parts of the software to continue in parallel with the state chart.<br />
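The two-function pattern can be illustrated with a minimal sketch. The class and function names here are hypothetical, chosen only to show the start/check split; they are not the group's implementation.

```python
class State:
    """One state of the chart: a start action and a completion check."""
    def __init__(self, name, on_start, is_done, next_state):
        self.name = name
        self.on_start = on_start        # called once when the state is entered
        self.is_done = is_done          # polled on every tick
        self.next_state = next_state    # name of the state to flow to (or None)

class StateChart:
    def __init__(self, states, initial):
        self.states = {s.name: s for s in states}
        self.current = self.states[initial]
        self.current.on_start()

    def tick(self):
        """One tick: if the current state's tasks are done, flow onward."""
        if self.current.is_done() and self.current.next_state is not None:
            self.current = self.states[self.current.next_state]
            self.current.on_start()
        return self.current.name
```

Because `tick` only polls `is_done` and returns immediately, the rest of the software (localization, potential field) can keep running between ticks.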
<br />
=== World model block ===<br />
The world model is the central object in the software architecture. The purpose of the world model is to act on changes in the positioning of PICO, such as sensing where on the map PICO is, determining to which pathpoints to drive, and in which direction to drive in order to reach a pathpoint. An additional responsibility of the world model is to visualise the conditioned LRF data and the pathpoint to which PICO is driving.<br />
<br />
==== Position estimation ====<br />
Using the wall finding algorithm described earlier, we are able to extract the relative positions, in Cartesian coordinates, of all the visible corners. By matching these corners to the known corners from the json file we are given, it is possible to determine the location of the PICO robot. This is done in one big function containing several steps. This function constantly runs in the background and determines the absolute position of the PICO robot, with the origin of this frame being the zero point in the json file.<br />
<br />
The first step in the localization function is extracting all corner positions from the json file, as well as getting all visible corner positions from the wall finding algorithm. Of course, these two kinds of positions are given in different frames, so a conversion needs to be made from one frame to the other. The coordinates of the corners given by the wall finding algorithm are converted from a relative frame to an absolute frame, in order to make them comparable to the json file coordinates. This may seem like a catch-22 situation, as converting these coordinates from a relative to an absolute frame requires the position and orientation of the PICO robot, which is exactly what we are trying to determine. The conversion is therefore made using the last correctly found absolute coordinates of the pico robot. We know this function ran not too long ago, so these last known absolute coordinates cannot be off by much. <br />
<br />
The rest of the function can be divided into two steps: first it determines the orientation, then it determines the position. For orientation finding, the relative-to-absolute conversion is performed many times, for orientations from 'the last known orientation - 0.3 rad' to 'the last known orientation + 0.3 rad', in steps of 0.01 rad. This gives a list of visible corner positions in absolute coordinates for many different candidate orientations. This list is compared to the corner coordinates from the json file. Now, you might think that the next step is to check which set of found corners matches the json file best, but this is not the case. Instead, the next step is to see which set of corners has the least variance in its errors when compared to the json file. This method is used because the real absolute position will differ from the absolute position used in the frame conversion, so there will always be an error. We know, however, that this error should be the same for all corners when the orientation is correct, and as such the variance should be as low as possible. So all sets of corners are compared to the json file, and the one with the lowest error variance gives us the correct orientation.<br />
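The orientation scan can be sketched as follows, using the 0.3 rad range and 0.01 rad step stated above. This is a simplified illustrative Python sketch; the function names, the nearest-corner matching, and the population-variance formula are assumptions, not the group's C++ code.

```python
import math

def nearest_error(p, map_corners):
    """Distance from an absolute corner estimate to its closest map corner."""
    return min(math.hypot(p[0] - m[0], p[1] - m[1]) for m in map_corners)

def find_orientation(rel_corners, map_corners, last_pos, last_theta,
                     half_range=0.3, step=0.01):
    """Scan candidate orientations around the last known one and pick the
    candidate for which the matching errors have the least variance."""
    best_theta, best_var = last_theta, float("inf")
    theta = last_theta - half_range
    while theta <= last_theta + half_range:
        errors = []
        for (rx, ry) in rel_corners:
            # Relative-to-absolute conversion using the last known position.
            ax = last_pos[0] + rx * math.cos(theta) - ry * math.sin(theta)
            ay = last_pos[1] + rx * math.sin(theta) + ry * math.cos(theta)
            errors.append(nearest_error((ax, ay), map_corners))
        mean = sum(errors) / len(errors)
        var = sum((e - mean) ** 2 for e in errors) / len(errors)
        if var < best_var:
            best_var, best_theta = var, theta
        theta += step
    return best_theta
```

Note that the score is the variance of the errors, not their size: a constant offset in all errors is expected (the last known position is stale), while a spread across corners indicates a wrong orientation.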
<br />
With the orientation known, the position can be determined. This is done by going through all found corners, looking at their absolute positions, and seeing which coordinate from the json file is closest to each. This works because we know the real absolute position cannot be too far off from our last known absolute position. When this has been done for every found corner, we compare all the errors between the found corners and their best matching json coordinates. What we will likely find is that most of these errors are roughly the same, with maybe some errors being totally different. These completely different errors are thrown out, and only the matching errors are kept. The mean of these errors is calculated, and the resulting value is the mismatch between the last known pico coordinates and the current real pico coordinates. The new pico coordinates can then be saved.<br />
<br />
The function checks whether its new position is a realistic result by comparing it to its previously known position. Since this previous position cannot be off by too much, a major difference between the new and old position means something must be wrong. If this is detected, the function does not save the new position, and instead keeps the last correct position as its current position. The function then simply reruns with new sensor data to make a new estimation, which will hopefully be better. If the function discards the new position several times in a row, it changes some of its parameters to increase the range in which it searches for the correct coordinates. For example, the range of orientation finding increases from "-0.3 to 0.3" rad to "-pi to pi". These wider ranges are also used when the function runs for the first time, as during initialization the correct position is only known roughly, and nothing is known about the orientation.<br />
<br />
The code snippet of the localisation function: * [https://gitlab.tue.nl/EMC2019/group3/snippets/138]<br />
<br />
==== Pathpoint route calculation ====<br />
In the paragraph "Path planning", the method the robot uses to navigate through the hospital was explained. The robot uses a point map, where each point is connected to neighboring points. To get from a point to a non-neighboring point, the robot needs to travel from point to point. This list of points is called the route the robot needs to travel. The pathway between two points is referred to as a path. Often, there are multiple routes from one point to another. In that case, the shortest route is preferred, since that minimises the chance of the robot losing its position on the map.<br />
<br />
In order to obtain the shortest route, Dijkstra's algorithm is used. This algorithm finds the shortest route from one point to another, given a point map and the weight of each path between the points. This weight can be based on the distance between the points or the difficulty of the path. Dijkstra's algorithm works by maintaining a list of the distance from a start point to every other point. The algorithm then checks the distance to each other point, whilst updating the list of shortest distances. The algorithm also remembers the previous point from which the shortest route came. So once the shortest distance list is completed, the algorithm can backtrack the route from the end point to the start point.<br />
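As an illustration, a minimal Python version of this procedure could look as follows (the real implementation is part of the C++ code; the nested-dictionary graph here stands in for the matrix used in the software):<br />

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: returns the list of points from start to
    goal with the lowest total path weight.
    graph: {point: {neighbour: weight}} (sketch)."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    visited = set()
    while queue:
        d, point = heapq.heappop(queue)
        if point in visited:
            continue
        visited.add(point)
        if point == goal:
            break
        for nb, w in graph.get(point, {}).items():
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = point       # remember where we came from
                heapq.heappush(queue, (nd, nb))
    # Backtrack from the end point to the start point.
    route, point = [goal], goal
    while point != start:
        point = prev[point]
        route.append(point)
    return route[::-1]
```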
<br />
The figure below illustrates how the algorithm works. Inside each point is the current shortest distance to that point, while the number near each path is the weight of that path. This image is from steemit.com [https://steemit.com/popularscience/@krishtopa/dijkstra-s-algorithm-of-finding-optimal-paths].<br />
<br />
[[File:Dijkstra_EMC3_2019.gif|center|Visualization Dijkstra algorithm|frame]]<br />
<br />
In the software, the point map is represented as a matrix. This matrix contains the weight of the path from each point to each other point. The figure below shows such a matrix. In this matrix, entry d[1,n] is the weight of the path from point 1 to point n. Value d[n,1] is the same, as this is the weight from point n to point 1. If, for example, point 1 and point 2 were not connected, the values in the matrix would be d[1,2] = d[2,1] = 0.<br />
The weight of a path is represented by a number larger than 0. The larger the number, the more difficult the path. The diagonal of the matrix contains the distances from each point to itself, which should always be zero (d[n,n] = 0).<br />
<br />
<br />
[[File:Pointmap_matrix.PNG|200px|center|thumb|Point map matrix]]<br />
<br />
Based on the mentioned properties of the point matrix, it can be concluded that this matrix should always be symmetric and should only have zeroes on its diagonal. These facts can be used to check whether the inputted map is correct and does not contain errors. Furthermore, editing the point matrix is simple. If a path between point n and point m needs to be added or changed, all that needs to be done is to change the values such that d[m,n] = d[n,m] = w, where w is the new weight of that path. If a path needs to be removed, w should be set to 0.<br />
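These checks and edits can be sketched as follows (illustrative Python, assuming the matrix is stored as a nested list):<br />

```python
def check_point_matrix(d):
    """A valid point matrix is square, symmetric and has a zero diagonal."""
    n = len(d)
    return (all(len(row) == n for row in d)
            and all(d[i][i] == 0 for i in range(n))
            and all(d[i][j] == d[j][i]
                    for i in range(n) for j in range(n)))

def set_path(d, n, m, w):
    """Add, change or remove (w = 0) the path between points n and m."""
    d[n][m] = d[m][n] = w
```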
<br />
==== Target-based driving ====<br />
Now that the absolute positions of both PICO and the next pathpoint are known on the map, the robot needs to know in which direction to drive. That means that the location of the pathpoint PICO should drive to needs to be transformed to the coordinate space of PICO. From this, the relative direction in which PICO should drive and how much PICO should rotate before driving can be determined. The result of these calculations is visualised in the image below.<br />
<br />
[[File:Coordinate_transform_EMC3_2019.png|center|Diagram of coordinate space transformation parameters|500px|thumb]]<br />
<br />
The first step is to determine the difference vector [[File:V_delta_EMC3_2019.png|frameless|upright=0.1]] between the position of the pathpoint [[File:V_point_EMC3_2019.png|frameless|upright=0.2]] and the position of PICO [[File:V_pico_EMC3_2019.png|frameless|upright=0.2]]. This is done by a simple subtraction:<br />
<br />
[[File:V_delta_calc_EMC3_2019.png|center|frameless|upright=0.75]]<br />
<br />
Then the coordinate space matrix of PICO [[File:S_pico_EMC3_2019.png|frameless|upright=0.2]] needs to be determined in order to create the transition matrix. This is done by using the absolute rotation of PICO [[File:Th_pico_EMC3_2019.png|frameless|upright=0.2]] in order to calculate the unit vector components of the x- and y-axes of the PICO coordinate space within the absolute coordinate space.<br />
<br />
[[File:S_pico_calc_EMC3_2019.png|center|frameless|upright=1.5]]<br />
<br />
Then the relative position vector of the pathpoint in PICO's coordinate space [[File:V_point_pico_EMC3_2019.png|frameless|upright=0.2]] can be calculated by multiplying the inverse of the PICO space matrix with the difference vector:<br />
<br />
[[File:V_point_pico_calc_EMC3_2019.png|center|frameless|upright=0.75]]<br />
<br />
With the relative coordinates of the next pathpoint known, PICO knows how far to turn in a certain direction before driving, and where to aim when avoiding obstacles.<br />
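For illustration, the complete transformation can be sketched in a few lines (Python sketch; it exploits that the inverse of the rotation matrix S is its transpose, and assumes PICO's x-axis points forward):<br />

```python
import math

def target_in_pico_frame(v_pico, th_pico, v_point):
    """Transform an absolute pathpoint into PICO's coordinate space."""
    # Difference vector between the pathpoint and PICO.
    dx = v_point[0] - v_pico[0]
    dy = v_point[1] - v_pico[1]
    # Multiply by the inverse (= transpose) of PICO's space matrix.
    rel_x =  math.cos(th_pico) * dx + math.sin(th_pico) * dy
    rel_y = -math.sin(th_pico) * dx + math.cos(th_pico) * dy
    # Angle PICO still has to rotate before facing the pathpoint.
    return rel_x, rel_y, math.atan2(rel_y, rel_x)
```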
<br />
==== Visualisation ====<br />
Several visualisation functions were made for the sake of debugging the trajectory of PICO. These functions are built upon the OpenCV framework and are meant to streamline the process of drawing the LRF data, the detected walls, the targeted pathpoints, and PICO itself. These functions require a Mat object as an input (passed by reference) and draw onto it. This way, all the data can be passed into these functions in world units, without the need to convert everything to pixel positions.<br />
<br />
By stacking all the mentioned functions on a single Mat object, the main visualiser of the program was created. A gif of the visualiser in a simulated environment is displayed below. The blue point represents the active target of PICO at any given time. The walls are drawn in white on top of the green LRF data.<br />
<br />
[[File:Visualisation_EMC3_2019.gif|center|Visualisation of the final software|frame]]<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement, thus preventing slip. The reduction of slip in the motion of PICO increases the accuracy of its movement on top of the fluency. The S-curve is implemented in two different functions: the function 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while the second function 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
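A minimal, discrete-time sketch of such a jerk-limited velocity setpoint could look as follows (illustrative Python; the braking heuristic and all limits are assumptions, not the actual Drivecontrol code):<br />

```python
def s_curve_step(v, a, v_target, a_max, j_max, dt):
    """One time step of a jerk-limited ('S-curve') velocity setpoint.
    Returns the new (velocity, acceleration)."""
    # Velocity still gained while ramping the acceleration back to zero.
    brake = a * abs(a) / (2.0 * j_max)
    if v + brake < v_target:
        a = min(a + j_max * dt, a_max)    # ramp the acceleration up
    else:
        a = max(a - j_max * dt, -a_max)   # ramp the acceleration down
    v = v + a * dt
    return v, a
```

Because the acceleration itself only changes by at most j_max*dt per step, the resulting velocity profile is the smooth S-shape that limits jerk.<br />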
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px|center|thumb|Potential field principle]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px|center|thumb|Potential field practical example]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
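The repulsion-only potential field can be sketched as follows (illustrative Python; the avoidance radius and gain are placeholder values, not the tuned ones):<br />

```python
import math

def repulsion_vector(scan, r_avoid=0.6, gain=0.3):
    """Sum the repulsion contributions of all LRF points inside the
    avoidance region (repulsion only, no attraction field).
    scan: list of (angle, distance) pairs in the robot frame."""
    fx = fy = 0.0
    for angle, dist in scan:
        if 0.0 < dist < r_avoid:
            # Magnitude grows as the obstacle gets closer; the
            # contribution points away from the measured point.
            mag = gain * (1.0 / dist - 1.0 / r_avoid)
            fx -= mag * math.cos(angle)
            fy -= mag * math.sin(angle)
    return fx, fy
```

In a symmetric corridor the left and right contributions cancel exactly, reproducing the behaviour of the second image above.<br />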
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the position data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected.<br />
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Test Goals==<br />
Several tests were executed during the course of the project, each with a different goal. The most important goals have been summarised below:<br />
<br />
* Test the laser range finder and the encoders.<br />
* Determine the static friction in the actuators for the x- and y-direction, and the rotation.<br />
* Collect laser data for the spatial recognition functions.<br />
* Test the Drivecontrol functionality, consisting of the S-curve implementation and the potential field.<br />
* Test the full system on the example map.<br />
<br />
==Results==<br />
The results from each test are described in separate parts.<br />
<br />
===Laser Range Finder & Encoders===<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled in 1000 parts, at a rate that can be set by the user. However, the actual range turned out to be different. During the test, laser data appeared within the 10 cm radius of the robot. This data produced false positives and had to be filtered out. The maximum range of the range finder was actually larger than 10 meters, but this does not limit the functionality of the robot. The other properties of the laser range finder were accurate.<br />
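Filtering out these false positives boils down to a simple range window on the raw scan (illustrative Python sketch):<br />

```python
def filter_scan(ranges, r_min=0.1, r_max=10.0):
    """Keep only LRF readings inside the valid window, with their beam
    index; readings inside the 10 cm radius are false positives
    (e.g. dust on the sensor)."""
    return [(i, r) for i, r in enumerate(ranges) if r_min <= r <= r_max]
```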
<br />
The values supplied by the encoders are automatically converted to a distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. Due to the three-wheel base configuration of the actuators, the ''x''- and ''y''-direction are less accurate than the rotation.<br />
<br />
===Static Friction===<br />
The actuators have a significant amount of static friction. The exact amount of friction in both the translational and the rotational direction was difficult to determine. An attempt was made by slightly increasing the input velocity for a certain direction until the robot began to move. This differed for each test, so the average was taken as the final value for Drivecontrol. It is important to note that the friction was significantly less for the rotational direction than for the translational directions. The ''y''-direction had the most friction and also tended to rotate instead of moving in a straight line.<br />
<br />
===Laserdata===<br />
Due to the limited number of testing moments, laser data was recorded in different situations. This data could be used to test the spatial recognition functions outside the testing moments. Data was recorded in different orientations and also while the robot was moving.<br />
<br />
===Drivecontrol Test===<br />
This test was executed to determine if the smoothness and accuracy of the Drivecontrol functions were sufficient or if the acceleration would have to be reduced. Both the smoothness and the accuracy were satisfactory, especially for the rotational movement.<br />
<br />
The potential field was also tested by walking towards the robot when it was driving forward. It successfully evaded the person in all directions while continuously driving forward.<br />
<br />
===Full System Test===<br />
The full system test was executed on the provided example map. However, during this test, no dynamic or static obstacles were present that were not on the map. PICO was able to find the cabinets in the correct order from different starting positions and orientations, but the limited space between the two cabinets caused some difficulties. PICO is supposed to return to its previous navigation point if orienting in front of a cabinet fails, and since this point was in between the two cabinets, this caused some issues. However, in the Hospital challenge map, no two navigation points were placed as close together as in the example map, which should solve this issue.<br />
<br />
= Conclusion & Recommendations =<br />
In conclusion, the software implementation of the design described in this Wiki is capable of fulfilling the basic functionality of the hospital challenge. That is, if the hospital environment only contains the necessary wall geometry and the cabinets. The addition of static and dynamic obstacles proved difficult to handle by the position estimation code, and ultimately led to the robot miscalculating the orientation at the end of the hospital challenge.<br />
<br />
It is recommended to dedicate the last week before the challenge to testing all the integrated code. This is because the software described in this Wiki was only ever fully implemented during the challenge itself, and the execution proved that the robot was still susceptible to disturbances in the environment in the form of dynamic obstacles. This could have been prevented by fine-tuning several variables in the code. Secondly, it is recommended to dedicate the most time and resources to the position estimation, as this is a crucial element that is relied upon for decision-making in the state machine and calculating the movement trajectory. The potential field implementation, however, proved very robust and simple to implement. It is therefore recommended to other groups to implement this to avoid collision with static and dynamic obstacles.<br />
<br />
[[File:Pico3th.png|thumb|center|400px|alt=Pico3th|Proud to be in 3rd place [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#The_day_of_the_challenge]]]<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Useful information ==<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div><br />
<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves will add a short clip<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways, and in rooms. When placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to approximately where B is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This can help in determining how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighboring, the software creates a route to that point. This route is a list of points to which the robot needs to drive in order to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route, namely Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin will add gifs and further details<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'', with a field of view of 229.2° that is divided into 1000 measurement points. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*To be completed by Yves.<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
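A minimal sketch of such a filter (illustrative, not the project code; the 0.1&ndash;10.0 m limits are the simulator values quoted in the Specifications section):<br />

```python
import math

def filter_scan(ranges, range_min=0.1, range_max=10.0):
    """Remove invalid LRF points: readings at or below the minimum sensor
    range (e.g. dust measured at the origin of the sensor) and readings
    beyond the maximum range. Returns (index, distance) pairs so the
    angle of each surviving point can still be recovered from its index."""
    return [(i, r) for i, r in enumerate(ranges)
            if math.isfinite(r) and range_min < r < range_max]
```

The surviving points, with their original beam indices, can then be fed into the feature detection algorithm.<br />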
<br />
=== Detection ===<br />
<br />
*To be completed by Collin: Dijkstra's algorithm.<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have in front of the cabinet is specified in the direction variable. The direction is subtracted from the actual orientation of PICO, and the orientation is corrected afterwards if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
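The listed path lengths are simply the Euclidean distances between the corresponding path points, which can be checked with a few coordinates taken from the tables above (sketch, coordinates in metres):<br />

```python
import math

# A few entries from the "Cabinet positioning points" and "Path points"
# tables above; coordinates in metres.
points = {
    3: (6.3, 3.2),  # cabinet 3
    4: (5.0, 2.5),  # start point
    5: (5.5, 3.2),
    6: (5.5, 3.9),
}

def path_length(a, b):
    """Straight-line (Euclidean) distance between two path points."""
    (xa, ya), (xb, yb) = points[a], points[b]
    return math.hypot(xb - xa, yb - ya)
```

For example, path 4->5 gives sqrt(0.5&sup2; + 0.7&sup2;) &asymp; 0.86, matching the first table.<br />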
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x + b*y + c) / sqrt(a^2 + b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If this distance falls below the limit value, there are no further sub-segments in the global segment.<br />
##* If it exceeds the limit value, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and to the right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
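The split step (3.1&ndash;3.4) can be sketched as follows; this is an illustrative reconstruction, not the actual project code:<br />

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b,
    using the line form ax + by + c = 0 built from the two end points."""
    (x1, y1), (x2, y2) = a, b
    la, lb = y2 - y1, -(x2 - x1)
    lc = x2 * y1 - y2 * x1
    return abs(la * p[0] + lb * p[1] + lc) / math.hypot(la, lb)

def split(points, threshold):
    """Recursive split step: if the farthest point from the end-point line
    exceeds the threshold, split there and recurse on both halves.
    Returns the indices of the retained end/corner points."""
    if len(points) < 3:
        return [0, len(points) - 1] if len(points) > 1 else [0]
    dists = [point_line_distance(p, points[0], points[-1]) for p in points]
    k = max(range(len(points)), key=lambda i: dists[i])
    if dists[k] < threshold:          # no more sub-segments in this segment
        return [0, len(points) - 1]
    left = split(points[:k + 1], threshold)
    right = [k + i for i in split(points[k:], threshold)]
    return sorted(set(left + right))
```

Applied to an L-shaped set of scan points, the recursion returns the two end points plus the corner in between.<br />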
<br />
Below is a visual representation of the split principle. The original image is taken from the 2017 EMC course, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted such that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating the lines to each other: corners follow from the intersections of adjacent lines, and the final end points are found by projecting the found vertices perpendicularly onto the fitted line.<br />
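The perpendicular projection used for this correction can be sketched as follows (illustrative helper, not the project code):<br />

```python
def project_onto_line(p, a, b):
    """Project point p perpendicularly onto the (infinite) line through
    a and b; used to snap a found vertex onto the fitted wall line."""
    (x1, y1), (x2, y2) = a, b
    dx, dy = x2 - x1, y2 - y1
    # Parameter t of the foot of the perpendicular along the line a->b.
    t = ((p[0] - x1) * dx + (p[1] - y1) * dy) / (dx * dx + dy * dy)
    return (x1 + t * dx, y1 + t * dy)
```

Projecting each vertex onto its fitted line yields the corrected end points of the wall segments.<br />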
<br />
=== Monitor block ===<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit condition; if so, a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. These exclude functions that always run in a separate thread, such as tracking the position of the robot on the map. The state chart is designed such that all requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
To be added by Kevin/Mike: a description of the spatial recognition functions.<br />
<br />
==== Visualisation ====<br />
* Still to write: a section on visualising the laser data, PICO itself and its intended goal.<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement, thus preventing slip. Reducing slip increases the accuracy of PICO's movement on top of its fluency. The S-curve is implemented in two functions: 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful information.<br />
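One possible shape for such an S-curve is a smoothstep velocity ramp; the sketch below is illustrative, and the actual Drive implementation may differ:<br />

```python
def s_curve_velocity(t, v_target, t_ramp):
    """Smooth velocity ramp from 0 to v_target over t_ramp seconds, using
    a smoothstep shape: v(t) = v_target * (3s^2 - 2s^3) with s = t/t_ramp.
    Both the start and the end of the ramp have zero acceleration, which
    limits jerk compared to a trapezoidal profile."""
    s = min(max(t / t_ramp, 0.0), 1.0)   # clamp to the ramp interval
    return v_target * (3 * s * s - 2 * s ** 3)
```

Sending the ramped value to the velocity setpoint every tick yields the smooth acceleration and deceleration described above.<br />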
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
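The repulsion-only summation described above can be sketched as follows; the angle values approximate the 229.2° field of view sampled in 1000 parts, and all names and parameters are illustrative:<br />

```python
import math

def repulsion_vector(scan, angle_min=-2.0, angle_inc=0.004, d_avoid=0.5):
    """Sum a repulsive vector over all laser points closer than the
    avoidance radius d_avoid. Each point pushes the robot directly away
    from it, weighted by how far it intrudes into the avoidance region;
    symmetric obstacles cancel out, as in the corridor example above.
    angle_min/angle_inc are illustrative values, not the real LRF specs."""
    vx = vy = 0.0
    for i, r in enumerate(scan):
        if 0.0 < r < d_avoid:
            ang = angle_min + i * angle_inc
            w = (d_avoid - r) / d_avoid      # intrusion depth, 0..1
            vx -= w * math.cos(ang)          # push away from the point
            vy -= w * math.sin(ang)
    return vx, vy
```

Adding this vector to the velocity setpoint steers the robot away from nearby walls and obstacles while it keeps driving.<br />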
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
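The orientation correction can be sketched as computing the wrapped angle difference between the odometry heading and the direction to the goal (illustrative, not the project code):<br />

```python
import math

def heading_error(pose, goal):
    """Angle PICO has to rotate (wrapped to [-pi, pi]) so that it looks
    directly at its goal; pose = (x, y, a) from odometry, goal = (x, y)."""
    x, y, a = pose
    desired = math.atan2(goal[1] - y, goal[0] - x)
    err = desired - a
    return math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
```

Feeding this error into the angular velocity setpoint turns PICO back towards its current navigation point whenever the potential field has pushed it off course.<br />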
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Test Goals==<br />
Several tests were executed during the course of the project, each with a different goal. The most important goals have been summarised below:<br />
<br />
* Test the laser range finder and the encoders.<br />
* Determine the static friction in the actuators for the x- and y-direction, and the rotation.<br />
* Collect laserdata for the spatial recognition functions.<br />
* Test the Drivecontrol functionality, consisting of the S-curve implementation and the potential field.<br />
* Test the full system on the example map.<br />
<br />
==Results==<br />
The results from each test are described in separate parts.<br />
<br />
===Laser Range Finder & Encoders===<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled in 1000 parts, at a time interval that can be set by the user. However, the actual range turned out to be wider on both ends. During the test, laser data appeared within the 10 cm radius of the robot; this data produced false positives and had to be filtered out. The maximum range of the range finder was also larger than 10 meters, but this does not limit the functionality of the robot. The other properties of the laser range finder were accurate.<br />
<br />
The values supplied by the encoders are automatically converted to a distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. Due to the three-omni-wheel layout of the drivetrain, the ''x''- and ''y''-directions are less accurate than the rotation.<br />
<br />
===Static Friction===<br />
The actuators have a significant amount of static friction. The exact amount of friction, in both the translational and the rotational direction, was difficult to determine. An attempt was made by slightly increasing the input velocity for a given direction until the robot began to move. The result differed for each test, so the average was taken as the final value for Drivecontrol. It is important to note that the friction was significantly smaller in the rotational direction than in the translational directions. The ''y''-direction had the most friction and also tended to rotate instead of moving in a straight line.<br />
<br />
===Laserdata===<br />
Due to the limited number of testing moments, laser data was recorded in several situations. This data could be used to test the spatial recognition functions outside the testing moments. Data was recorded in different orientations and also while the robot was moving.<br />
<br />
===Drivecontrol Test===<br />
This test was executed to determine if the smoothness and accuracy of the Drivecontrol functions were sufficient or if the acceleration would have to be reduced. Both the smoothness and the accuracy were satisfactory, especially for the rotational movement.<br />
<br />
The potential field was also tested by walking towards the robot when it was driving forward. It successfully evaded the person in all directions while continuously driving forward.<br />
<br />
===Full System Test===<br />
The full system test was executed on the provided example map. However, during this test, no dynamic or static obstacles were present that were not on the map. PICO was able to find the cabinets in the correct order from different starting positions and orientations, but the limited space between the two cabinets caused some difficulties: PICO is supposed to return to its previous navigation point if the positioning in front of a cabinet fails, and this point lay between the two cabinets. However, in the Hospital challenge map, no two navigation points are placed as close together as in the example map, which should solve this issue.<br />
<br />
= Conclusion & Recommendations =<br />
*To be completed by Mike/Kevin.<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77440Embedded Motion Control 2019 Group 32019-06-16T11:13:12Z<p>S136625: /* Testing */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
This wiki page describes the approach and process of group 3 concerning the Escape room challenge and the Hospital challenge with the PICO robot. The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Hospital challenge involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
The wiki is subdivided in the following parts: Firstly, the approach for the Escape room challenge is explained and evaluated. The second topic is the approach and evaluation of the Hospital challenge. This is followed by a full description of the system architecture used to perform the Hospital challenge. After the system architecture, the most important tests and test results are explained. Lastly, a conclusion and recommendation is provided.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves to add a GIF clip.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did succeed in the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown and that they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be approached from a neighbouring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is at a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This can help in how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software creates a route to that point. This route is a list of points to which the robot needs to drive to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
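A minimal sketch of Dijkstra's algorithm over such a point graph (illustrative; the weights below are a few of the path lengths from the Path planning section):<br />

```python
import heapq

def dijkstra(graph, start, goal):
    """Dijkstra's shortest-path algorithm over a weighted adjacency dict;
    returns the shortest route as a list of points, as used to chain
    navigation points from start to goal."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return route
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(queue, (cost + w, nbr, route + [nbr]))
    return None  # no route exists

# A small excerpt of the path-point graph; weights are path lengths
# taken from the tables in the Path planning section.
graph = {
    4: {5: 0.86, 6: 1.49},
    5: {4: 0.86, 3: 0.8, 6: 0.7},
    6: {4: 1.49, 5: 0.7, 3: 1.06, 7: 1.7},
    3: {5: 0.8, 6: 1.06},
    7: {6: 1.7},
}
```

For example, the shortest route from the start point (4) to cabinet 3 goes via point 5, since 0.86 + 0.8 is shorter than any route via point 6.<br />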
<br />
== Reflection ==<br />
TBD<br />
<br />
*To be completed by Kevin, including GIF clips.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given, together with the source they come from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*To be completed by Yves.<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*TODO (Collin): describe the Dijkstra-based path planning.<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at start-up. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, and the difference is afterwards corrected if PICO is not aligned correctly.<br />
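The automatic placement of a cabinet path point amounts to taking the centre of the virtual area in front of the cabinet. A minimal sketch, assuming the JSON map yields the corner coordinates of that area (the function name and input layout are illustrative, not the project code):<br />

```python
def cabinet_path_point(front_corners):
    """Centre of the virtual area in front of a cabinet.

    `front_corners` is a list of (x, y) corner points of that area,
    as read from the JSON map; the path point is placed exactly in
    the middle of it.
    """
    xs = [p[0] for p in front_corners]
    ys = [p[1] for p in front_corners]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

For a square front area with corners (0, 0), (1, 0), (1, 1) and (0, 1), this places the path point at (0.5, 0.5).<br />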
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
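Given the tables above, the shortest route between any two points can be computed with Dijkstra's algorithm. The sketch below encodes the edges from the path-length tables and finds the route from the start point (4) to cabinet 0 (point 0); the data layout is illustrative, not the project code.<br />

```python
import heapq

# Undirected graph built from the path-length tables above.
EDGES = [
    (4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06),
    (6, 7, 1.7), (7, 8, 2.0), (8, 9, 1.5), (9, 2, 1.6), (9, 10, 1.84),
    (9, 11, 1.17), (2, 10, 0.9), (10, 11, 0.85), (11, 12, 1.2),
    (12, 13, 1.17), (12, 14, 0.8), (13, 0, 0.5), (13, 14, 0.85),
    (14, 15, 1.2), (15, 1, 1.1), (15, 16, 0.7), (15, 17, 0.76),
    (1, 16, 0.85), (16, 17, 1.1), (17, 18, 1.5), (18, 19, 2.0), (19, 8, 2.0),
]

def shortest_route(start, goal, edges=EDGES):
    """Dijkstra's algorithm; returns (total length, list of points)."""
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    route, node = [], goal
    while node != start:
        route.append(node)
        node = prev[node]
    route.append(start)
    return dist[goal], route[::-1]
```

From the start point to cabinet 0 this yields the route 4, 6, 7, 8, 9, 11, 12, 13, 0 with a total length of 10.73, which is what car-navigation-style routing over the point map looks like in practice.<br />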
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information; a commonly used approach is the split-and-merge algorithm, extended with the RANSAC algorithm. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (a*x + b*y + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x + b*y + c) / sqrt(a^2 + b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If this value falls below the limit value, there are no more sub-segments in the global segment.<br />
##* If this value exceeds the limit value, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
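The recursive split step of the list above can be sketched as follows; the threshold, names, and plain-tuple point layout are illustrative, not the project code:<br />

```python
import math

def line_coeffs(p, q):
    """Coefficients (a, b, c) of the line a*x + b*y + c = 0 through p and q."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    c = -(a * p[0] + b * p[1])
    return a, b, c

def split(points, i, j, limit):
    """Indices of the break points of points[i..j] (split step 3 above).

    Finds the point farthest from the chord between the segment end
    points; if that distance exceeds `limit`, the segment is split
    there and both halves are processed recursively.
    """
    a, b, c = line_coeffs(points[i], points[j])
    norm = math.hypot(a, b)
    k, d_max = i, 0.0
    for m in range(i + 1, j):
        d = abs(a * points[m][0] + b * points[m][1] + c) / norm
        if d > d_max:
            k, d_max = m, d
    if d_max <= limit:
        return [i, j]            # no further corner inside this segment
    left = split(points, i, k, limit)
    right = split(points, k, j, limit)
    return left[:-1] + right     # merge without duplicating index k
```

For an L-shaped wall sampled as (0,0), (1,0), (2,0), (3,0), (3,1), (3,2), (3,3), the split finds the corner at (3,0) and returns the break-point indices 0, 3, 6.<br />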
<br />
Below is a visual representation of the split principle. The image originates from the 2017 EMC course, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''TODO (Mike): extend this section with a description of the RANSAC function.'''<br />
<br />
A final line correction is needed because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines through the points and intersecting them with each other; the final end points are found by projecting the found vertices perpendicularly onto these lines.<br />
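The corner part of this correction amounts to intersecting the two fitted lines. A minimal sketch, using the same (a, b, c) line form as the split step (the function name is illustrative):<br />

```python
def intersect(l1, l2):
    """Intersection of two lines a*x + b*y + c = 0, or None if parallel.

    Used to snap two fitted wall segments to their shared corner.
    """
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel walls never meet in a corner
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return x, y
```

For example, the wall along the x-axis (0, 1, 0) and the vertical wall x = 3, i.e. (1, 0, -3), intersect in the corner (3, 0).<br />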
<br />
=== Monitor block ===<br />
<br />
*TODO (Collin): complete this section.<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit condition; if so, the state machine transitions to the next state.<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. Functions that always run in a separate thread, such as tracking the position of the robot on the map, are excluded. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
'''TODO (Kevin/Mike): describe the spatial recognition here.'''<br />
<br />
==== Visualisation ====<br />
*TODO: write a short section on visualising the laser data, PICO itself, and its intended goal.<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is smooth. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement, thus preventing slip. Reducing slip increases the accuracy of PICO's movement on top of its smoothness. The S-curve is implemented in two functions: 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful information.<br />
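The idea of the velocity ramp can be illustrated with a cubic smoothstep profile, which starts and ends with zero acceleration and therefore has bounded jerk. This is a stand-in for the actual S-curve equations linked under Useful information, not the project implementation; the 0.5 m/s target below matches the robot's maximum translational speed.<br />

```python
def s_curve_velocity(t, t_total, v_target):
    """Velocity setpoint at time t for a smooth 0 -> v_target ramp.

    The cubic 3s^2 - 2s^3 has zero acceleration at both ends, which is
    what limits jerk compared to a plain trapezoidal ramp.
    """
    if t <= 0.0:
        return 0.0
    if t >= t_total:
        return v_target
    s = t / t_total
    return v_target * (3.0 * s * s - 2.0 * s ** 3)
```

Sampling this profile gives a monotonically increasing velocity that reaches half the target speed exactly halfway through the ramp.<br />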
<br />
Drive has been further incorporated into a function that uses a potential field. This function smoothly prevents the robot from bumping into objects. See the figure below for a visual representation of an example potential field: the leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles, and the rightmost image shows the combination of the two. Every wall and object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero; once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
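The repulsion-only sum vector described above could be computed per scan as in the following sketch. The robot frame here has x forward and y to the left, and the avoidance radius and gain are assumed values, not the ones used on PICO.<br />

```python
import math

def repulsion_vector(scan, d0=0.5, gain=1.0):
    """Sum of repulsive contributions from LRF points within radius d0.

    `scan` is a list of (angle, distance) pairs in the robot frame.
    Each nearby point pushes the robot directly away from it, with a
    magnitude that grows as the point gets closer (1/d - 1/d0).
    """
    fx = fy = 0.0
    for angle, d in scan:
        if 0.0 < d < d0:
            mag = gain * (1.0 / d - 1.0 / d0)
            fx -= mag * math.cos(angle)  # push away from the point
            fy -= mag * math.sin(angle)
    return fx, fy
```

With symmetric walls left and right the contributions cancel and the sum vector is zero; with a wall only on the left, the vector points to the right, exactly as in the corridor images above.<br />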
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
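The orientation correction amounts to comparing the heading towards the goal with the odometry heading and wrapping the difference to the shortest rotation. A sketch with hypothetical names:<br />

```python
import math

def orientation_error(pose, goal):
    """Angle PICO must rotate so that it looks directly at its goal.

    `pose` is (x, y, theta) from odometry, `goal` is (x, y); the result
    is wrapped to (-pi, pi] so the shortest rotation is taken.
    """
    x, y, theta = pose
    desired = math.atan2(goal[1] - y, goal[0] - x)
    error = desired - theta
    return math.atan2(math.sin(error), math.cos(error))
```

For a robot at the origin looking along +y with its goal on the +x axis, the error is -90 degrees, i.e. a quarter turn clockwise.<br />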
<br />
= Testing =<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Test Goals==<br />
Several tests were executed during the course of the project, each with a different goal. The most important goals have been summarised below:<br />
<br />
* Test the laser range finder and the encoders.<br />
* Determine the static friction in the actuators for the x- and y-direction, and the rotation.<br />
* Collect laserdata for the spatial recognition functions.<br />
* Test the Drivecontrol functionality, consisting of the S-curve implementation and the potential field.<br />
* Test the full system on the example map.<br />
<br />
==Results==<br />
The results of each test are described in separate sections.<br />
<br />
===Laser Range Finder & Encoders===<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m and the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled at 1000 angles, at an update interval that can be configured by the user. However, the actual range turned out to be larger on both ends. During the test, laser data appeared within the 10 cm radius of the robot; this data produced false positives and had to be filtered out. The maximum range of the range finder was larger than 10 meters, but this does not limit the functionality of the robot. The other properties of the laser range finder were accurate.<br />
<br />
The values supplied by the encoders are automatically converted to distances in the ''x''- and ''y''-direction and a rotation ''a'' in radians. Due to the three-omni-wheel configuration of the base, the ''x''- and ''y''-directions are less accurate than the rotation.<br />
<br />
===Static Friction===<br />
The actuators have a significant amount of static friction. The exact amount of friction in the translational and rotational directions was difficult to determine. An attempt was made by slightly increasing the input velocity for a certain direction until the robot began to move. This varied between tests, so the average was taken as the final value for the Drivecontrol. It is important to note that the friction was significantly lower for the rotational direction than for the translational directions. The ''y''-direction had the most friction and also tended to rotate instead of moving in a straight line.<br />
<br />
===Laserdata===<br />
Because testing moments were limited, laser data was recorded in several situations, so that the spatial recognition functions could be tested outside the testing moments. Data was recorded in different orientations and also while the robot was moving.<br />
<br />
===Drivecontrol Test===<br />
This test was executed to determine if the smoothness and accuracy of the Drivecontrol functions were sufficient or if the acceleration would have to be reduced. Both the smoothness and the accuracy were satisfactory, especially for the rotational movement.<br />
<br />
The potential field was also tested by walking towards the robot when it was driving forward. It successfully evaded the person in all directions while continuously driving forward.<br />
<br />
===Full System Test===<br />
The full system test was executed on the provided example map. However, during this test, no dynamic or static obstacles were present that were not on the map. PICO was able to find the cabinets in the correct order from different starting positions and orientations, but the limited space between the two cabinets caused some difficulties. Since PICO is supposed to return to its previous navigation point if orienting in front of a cabinet fails, and this point lay between the two cabinets, this caused some issues. However, in the Hospital challenge map, no two navigation points are placed as close together as in the example map, which should resolve this issue.<br />
<br />
= Conclusion & Recommendations =<br />
'''TODO (Mike/Kevin): write the conclusion and recommendations.'''<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div><br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*TODO (Yves): add a GIF of the challenge run.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
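The state chart can be sketched as a small transition function. The state names and distance thresholds below are illustrative assumptions, not the values used during the challenge:<br />

```python
# Illustrative wall-follower state machine for the escape room.
# Distances are in metres; D_FRONT and D_GAP are assumed thresholds.
D_FRONT = 0.4   # wall ahead: turn left to follow the next wall
D_GAP = 1.0     # opening to the right: turn into the corridor

def next_state(state, front, right):
    """One tick of the wall-following state chart.

    `front` and `right` are the measured distances to the nearest
    object ahead of and to the right of the robot.
    """
    if state == "FORWARD":
        return "FOLLOW_WALL" if front < D_FRONT else "FORWARD"
    if state == "FOLLOW_WALL":
        if right > D_GAP:
            return "TURN_RIGHT"   # gap on the right: exit found
        if front < D_FRONT:
            return "TURN_LEFT"    # wall ahead: next wall to follow
        return "FOLLOW_WALL"
    if state in ("TURN_LEFT", "TURN_RIGHT"):
        return "FOLLOW_WALL"      # resume following after the turn
    return state
```

Driving straight at a wall thus switches the robot into wall following, a wall ahead triggers the 90-degree counterclockwise turn, and a gap on the right triggers the turn into the exit corridor.<br />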
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest-performing group as a result of these modifications. We felt, however, that they were worth the slowdown, as they proved the robustness of our software's simple approach.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways, and in rooms. When placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For each path between points, it can be defined whether the path runs through a doorway or hallway, or through a room. This helps in determining how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighbouring, the software creates a route to that point: a list of points the robot needs to visit to reach the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route, namely Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
'''TODO (Kevin): write the reflection and add GIFs of the challenge run.'''<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful information.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition show a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
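To illustrate how these tables define the navigation graph, the sketch below (Python, illustrative only, not the group's actual implementation) builds the graph from the path lengths listed above and applies Dijkstra's algorithm, which the route planner uses to find the shortest route between path points:<br />

```python
import heapq

# Illustrative sketch: the edges below are the path lengths from the
# tables above; Dijkstra's algorithm finds the shortest route.
EDGES = {
    (4, 5): 0.86, (4, 6): 1.49, (5, 3): 0.8, (5, 6): 0.7, (3, 6): 1.06,
    (6, 7): 1.7, (7, 8): 2.0, (8, 9): 1.5, (9, 2): 1.6, (9, 10): 1.84,
    (9, 11): 1.17, (2, 10): 0.9, (10, 11): 0.85, (11, 12): 1.2,
    (12, 13): 1.17, (12, 14): 0.8, (13, 0): 0.5, (13, 14): 0.85,
    (14, 15): 1.2, (15, 1): 1.1, (15, 16): 0.7, (15, 17): 0.76,
    (1, 16): 0.85, (16, 17): 1.1, (17, 18): 1.5, (18, 19): 2.0, (19, 8): 2.0,
}

def shortest_route(start, goal):
    """Dijkstra's algorithm over the undirected path-point graph.

    Returns (total length, list of path points to visit)."""
    adj = {}
    for (a, b), d in EDGES.items():
        adj.setdefault(a, []).append((b, d))
        adj.setdefault(b, []).append((a, d))
    queue = [(0.0, start, [start])]   # (distance so far, node, route)
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in visited:
            continue
        visited.add(node)
        for nxt, d in adj[node]:
            if nxt not in visited:
                heapq.heappush(queue, (dist + d, nxt, route + [nxt]))
    return float("inf"), []
```

For example, the shortest route from the start point (4) to cabinet 3 is 4 → 5 → 3 with a length of 1.66 m.<br />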
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process this data into useful information; a commonly used approach is the split-and-merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the straight line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = |a*x + b*y + c| / sqrt(a^2 + b^2))<br />
## Compare the point with the largest distance against the distance threshold<br />
##* If this largest distance falls below the threshold, the global segment contains no further sub-segments.<br />
##* If it exceeds the threshold, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and right of the split point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
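Steps 3.1 to 3.4 above can be sketched as a recursive function (Python, illustrative only; the threshold value is an assumption, and the real implementation is part of the robot software):<br />

```python
import math

# Illustrative sketch of the recursive split step described above.
def split(points, threshold=0.05):
    """Recursively split a segment of (x, y) points at the point that lies
    furthest from the line through the segment's end points. Returns the
    indices of the break points (candidate corners), end points included."""
    if len(points) < 3:
        return [0, len(points) - 1]
    (x1, y1), (x2, y2) = points[0], points[-1]
    # Line through the end points in the form ax + by + c = 0
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = math.hypot(a, b) or 1.0
    dists = [abs(a * x + b * y + c) / norm for x, y in points]
    i_max = max(range(len(points)), key=lambda i: dists[i])
    if dists[i_max] < threshold:
        return [0, len(points) - 1]  # segment is a single straight line
    left = split(points[: i_max + 1], threshold)
    right = split(points[i_max:], threshold)
    # Merge index lists, shifting the right half and dropping the duplicate
    return left + [i + i_max for i in right[1:]]
```

Applied to an L-shaped set of points, the function returns the two end points plus the corner point in between.<br />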
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
''(TODO, Mike: extend this section with a description of the RANSAC function.)''<br />
<br />
A final line correction needs to be done, because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted such that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating these lines to each other. The final end points are determined by projecting the found vertices perpendicularly onto the fitted lines.<br />
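The corner correction described above boils down to two standard geometric operations, sketched here in Python (illustrative only; the real implementation is part of the robot software):<br />

```python
# Two fitted wall lines (each ax + by + c = 0) are intersected to obtain
# the corrected corner, and a loose end point is projected onto its line.

def intersect(l1, l2):
    """Intersection of lines a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # lines are (nearly) parallel: no single corner
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)

def project(point, line):
    """Perpendicular projection of a point onto the line ax + by + c = 0."""
    x0, y0 = point
    a, b, c = line
    t = (a * x0 + b * y0 + c) / (a * a + b * b)
    return (x0 - a * t, y0 - b * t)
```

For example, the lines x = 1 and y = 2 intersect at the corner (1, 2), and a stray end point is snapped to the nearest point on its wall line.<br />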
<br />
=== Monitor block ===<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run: on every tick, it is checked whether the current state has fulfilled its exit conditions and, if so, to which state the program should transition.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram serves as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. These exclude functions that always run in a separate thread, such as tracking the position of the robot on the map. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
''(TODO, Kevin/Mike: describe the spatial recognition functionality.)''<br />
<br />
==== Visualisation ====<br />
''(TODO: write a section on visualising the laser data, PICO itself and its current goal.)''<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is smooth. This is achieved by implementing an S-curve for every velocity change. The S-curve was chosen to limit jerk in the robot's movement and thus prevent slip. Reducing slip increases the accuracy of PICO's movement in addition to its smoothness. The S-curve is implemented in two functions: 'Drive' accelerates and decelerates smoothly to a certain velocity or rotation rate in any direction, while 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful information.<br />
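As a simple illustration of the idea (not the actual 'Drive' code, which is part of the robot software), a jerk-limited ramp can be obtained by shaping the velocity with a cubic smoothstep:<br />

```python
# Minimal sketch of an S-curve velocity ramp; the smoothstep shape and
# parameter names are assumptions, not the group's implementation.
def s_curve_velocity(t, t_ramp, v_target):
    """Velocity at time t while ramping from 0 to v_target over t_ramp
    seconds. The cubic smoothstep gives zero acceleration at both ends of
    the ramp, which limits jerk compared to a trapezoidal profile."""
    if t <= 0.0:
        return 0.0
    if t >= t_ramp:
        return v_target
    s = t / t_ramp
    return v_target * (3.0 * s * s - 2.0 * s * s * s)
```

Halfway through the ramp the velocity is exactly half the target, and the acceleration is zero at both the start and the end of the ramp.<br />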
<br />
Drive has been further incorporated into a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field: the leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far away enough from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
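The repulsion-only potential field described above can be sketched as follows (Python, illustrative only; the avoidance radius, the weighting and the (angle, range) scan format are assumptions, not the actual implementation):<br />

```python
import math

# Sum a repulsive vector contribution for every scan point within the
# avoidance radius; far-away points contribute nothing, and symmetric
# surroundings (e.g. a corridor) cancel out, as described in the text.
def repulsion_vector(scan, avoid_radius=0.5):
    """Potential field sum vector (x, y) in the robot frame for a scan
    given as (angle, range) pairs."""
    fx = fy = 0.0
    for angle, r in scan:
        if 0.0 < r < avoid_radius:
            weight = (avoid_radius - r) / avoid_radius  # stronger when closer
            # push away from the obstacle, i.e. opposite to the beam direction
            fx -= weight * math.cos(angle)
            fy -= weight * math.sin(angle)
    return fx, fy
```

With obstacles at equal distance on both sides, the contributions cancel and the robot keeps its trajectory; a single nearby obstacle on the left produces a sum vector pointing to the right.<br />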
<br />
Although the potential field prevents collisions with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely PICO facing its goal directly, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
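The orientation correction can be sketched as follows (Python, illustrative only; the proportional gain is an assumption, while the 1.2 rad/s clamp follows the specified maximum rotation speed):<br />

```python
import math

# Turn-towards-goal correction using the odometry pose (x, y, a) and a
# goal point in the same frame; names and the gain are illustrative.
def heading_correction(pose, goal, gain=1.0, max_rot=1.2):
    """Angular velocity command that turns the robot towards the goal.
    The error is wrapped to [-pi, pi) so the robot always takes the
    shortest rotation; the output is clamped to +/- 1.2 rad/s."""
    x, y, a = pose
    desired = math.atan2(goal[1] - y, goal[0] - x)
    error = (desired - a + math.pi) % (2.0 * math.pi) - math.pi
    return max(-max_rot, min(max_rot, gain * error))
```

The angle wrapping matters: without it, a goal just behind the robot could command a rotation of almost a full turn instead of the short way around.<br />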
<br />
= Testing =<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Test Goals==<br />
Several tests were executed during the course of the project, each with a different goal. The most important goals have been summarised below:<br />
<br />
* Test the laser range finder and the encoders.<br />
* Determine the static friction in the actuators for the x- and y-direction, and the rotation.<br />
* Collect laserdata for the spatial recognition functions.<br />
* Test the Drivecontrol functionality, consisting of the S-curve implementation and the potential field.<br />
* Test the full system on the example map.<br />
<br />
==Results==<br />
The results of each test are described in separate sections.<br />
<br />
===Laser Range Finder & Encoders===<br />
According to the simulator, the range of the laser range finder is 10 cm to 10 m and its field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. The field of view is sampled in 1000 parts, at a rate that can be configured by the user. The real sensor deviated from these values in both directions: during the test, laser data appeared within the 10 cm radius of the robot. This data produced false positives and had to be filtered out. The maximum range turned out to be larger than 10 meters, but this does not limit the functionality of the robot. The other properties of the laser range finder were accurate.<br />
<br />
The values supplied by the encoders are automatically converted to distances in the ''x''- and ''y''-direction and a rotation ''a'' in radians. Due to the three-omni-wheel configuration of the drivetrain, the ''x''- and ''y''-directions are less accurate than the rotation.<br />
<br />
===Static Friction===<br />
The actuators have a significant amount of static friction. The exact amount of friction in the translational and rotational directions was difficult to determine. An attempt was made by slightly increasing the input velocity in a given direction until the robot began to move. The result differed for each test, so the average was taken as the final value for the Drivecontrol. It is important to note that the friction was significantly lower for rotation than for translation. The ''y''-direction had the most friction, and the robot also tended to rotate instead of moving in a straight line.<br />
<br />
===Laserdata===<br />
Because testing moments were limited, laser data was recorded in several situations, so that the spatial recognition functions could be tested outside the testing moments. Data was recorded in different orientations and also while the robot was moving.<br />
<br />
===Drivecontrol Test===<br />
This test was executed to determine if the smoothness and accuracy of the Drivecontrol functions were sufficient or if the acceleration would have to be reduced. Both the smoothness and the accuracy were satisfactory, especially for the rotational movement.<br />
<br />
The potential field was also tested by walking towards the robot when it was driving forward. It successfully evaded the person in all directions while continuously driving forward.<br />
<br />
===Full System Test===<br />
<br />
= Conclusion & Recommendations =<br />
''(TODO, Mike/Kevin: write the conclusion and recommendations.)''<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
''(TODO, Yves: add a GIF of the challenge.)''<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
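The state chart above can be summarised in code as follows (Python, illustrative only; the state names, distance thresholds and sensing interface are assumptions, not the actual implementation):<br />

```python
# One tick of the wall-follower state machine: given the current state and
# the distances (in metres) to the nearest obstacle in front and to the
# right, return the next state. Thresholds are illustrative.
D_FRONT_STOP = 0.4  # front distance that triggers a left turn
D_GAP = 1.0         # right distance treated as a gap (exit or finish)

def next_state(state, d_front, d_right):
    if state == "FIND_WALL":
        return "FOLLOW_WALL" if d_front < D_FRONT_STOP else "FIND_WALL"
    if state == "FOLLOW_WALL":
        if d_right > D_GAP:
            return "TURN_INTO_EXIT"   # gap to the right: exit corridor found
        if d_front < D_FRONT_STOP:
            return "TURN_LEFT"        # wall ahead: rotate 90 degrees CCW
        return "FOLLOW_WALL"
    if state == "TURN_LEFT":
        return "FOLLOW_WALL" if d_front >= D_FRONT_STOP else "TURN_LEFT"
    if state == "TURN_INTO_EXIT":
        # right wall picked up again: follow it through the corridor
        return "FOLLOW_CORRIDOR" if d_right <= D_GAP else "TURN_INTO_EXIT"
    if state == "FOLLOW_CORRIDOR":
        # second gap to the right: the finish line has been crossed
        return "FINISHED" if d_right > D_GAP else "FOLLOW_CORRIDOR"
    return state
```

The second right-hand gap is what distinguishes the finish from the exit: the first gap sends the robot into the corridor, the second one ends the run.<br />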
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program completed the challenge, we were the slowest performing group as a result of these modifications. We felt, however, that the modifications were worth the slowdown, and they demonstrated the robustness of our simple approach.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map from the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at various locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is at a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to approximately where B is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help in deciding how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighbouring, the software creates a route to that point. This route is a list of points the robot needs to visit to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
''(TODO, Kevin: add the reflection, including GIFs of the challenge.)''<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful information.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, originally developed by Gostai (now part of Aldebaran). The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given together with the source they come from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 parts. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
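For illustration, the relation between a scan index and a beam angle, assuming 1000 equally spaced beams over the 229.2° field of view (the index convention, with index 0 at the rightmost beam, is an assumption):<br />

```python
import math

FOV = math.radians(229.2)  # field of view from the specification above
N_BEAMS = 1000             # number of parts the field of view is divided into

def beam_angle(index):
    """Angle of beam `index` relative to the front of the robot (rad),
    negative to the right and positive to the left."""
    return -FOV / 2.0 + index * FOV / (N_BEAMS - 1)
```

The extreme indices then correspond to -114.6° and +114.6° as measured from the front of the robot.<br />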
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves afmaken<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*Collin dijkstra shit<br />
<br />
==== Path planning ====<br />
The method of determine the path points is done automatic and by hand. The program will load the Json map file when the program starts. The code will detect where all the cabinets are and what the front is of a cabinet. Each cabinet path point will exactly be placed in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified within the direction variable. The direction will be subtracted to the real orientation of PICO and afterward be corrected if PICO is not aligned right.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
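The path points and path lengths above form a weighted graph, on which the shortest route between any two points can be computed with Dijkstra's algorithm, as described in the approach for the Hospital Competition. The sketch below is illustrative Python, not the project's code; the edge list is copied from the tables, and edges are assumed to be traversable in both directions.<br />

```python
import heapq

# Edge lengths copied from the path length tables above (bidirectional).
EDGES = [
    (4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06),
    (6, 7, 1.7), (7, 8, 2.0), (8, 9, 1.5), (9, 2, 1.6), (9, 10, 1.84),
    (9, 11, 1.17), (2, 10, 0.9), (10, 11, 0.85), (11, 12, 1.2),
    (12, 13, 1.17), (12, 14, 0.8), (13, 0, 0.5), (13, 14, 0.85),
    (14, 15, 1.2), (15, 1, 1.1), (15, 16, 0.7), (15, 17, 0.76),
    (1, 16, 0.85), (16, 17, 1.1), (17, 18, 1.5), (18, 19, 2.0), (19, 8, 2.0),
]

def dijkstra(edges, start, goal):
    """Shortest route between two path points (Dijkstra's algorithm)."""
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph[node]:
            if nxt not in visited:
                heapq.heappush(queue, (dist + w, nxt, path + [nxt]))
    return float('inf'), []
```

With these tables, the shortest route from the start point (4) to cabinet 0 is 4, 6, 7, 8, 9, 11, 12, 13, 0, with a total length of 10.73.<br />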
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used algorithm is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x + b*y + c) / sqrt(a^2 + b^2))<br />
## Compare the largest perpendicular distance with the distance limit value<br />
##* If this value falls below the limit value, there are no further segments (parts) in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
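The split step (3.1 to 3.4) can be sketched as a recursive function. This is an illustrative Python sketch, not the project's implementation; the threshold value used in the example is a placeholder.<br />

```python
import math

def point_line_distance(p, p1, p2):
    """Perpendicular distance from p to the line through p1 and p2,
    using d = |a*x + b*y + c| / sqrt(a^2 + b^2)."""
    a = p2[1] - p1[1]
    b = -(p2[0] - p1[0])
    c = -a * p1[0] - b * p1[1]
    return abs(a * p[0] + b * p[1] + c) / math.hypot(a, b)

def split(points, threshold):
    """Return the indices where the segment must be split (the corners).

    The point farthest from the line between the segment end points is
    compared against the threshold; if it exceeds it, the segment is
    split there and both halves are processed recursively (steps 3.1-3.4).
    """
    if len(points) < 3:
        return []
    dists = [point_line_distance(p, points[0], points[-1]) for p in points]
    i = max(range(len(points)), key=lambda k: dists[k])
    if dists[i] <= threshold:
        return []
    left = split(points[:i + 1], threshold)
    right = [i + j for j in split(points[i:], threshold)]
    return left + [i] + right
```

For an L-shaped set of points, the function returns the index of the corner; for collinear points it returns no split points.<br />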
<br />
''To be extended with a description of Mike's RANSAC function.''<br />
<br />
A final line correction is needed, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted such that the corners and end points align with the real wall lines. This is done by determining the lines between the points and equating these lines to each other: each corner is the intersection of two adjacent lines, and each end point is the point on the line where the found vertex projects perpendicularly onto that line.<br />
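Equating two fitted lines amounts to solving a 2x2 linear system. A minimal sketch, assuming a hypothetical line representation a*x + b*y + c = 0 (the actual implementation may represent lines differently):<br />

```python
def intersect(l1, l2):
    """Corner point of two lines given as (a, b, c) with a*x + b*y + c = 0.

    Returns None for (near-)parallel lines."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    # Cramer's rule on: a1*x + b1*y = -c1  and  a2*x + b2*y = -c2
    x = (-c1 * b2 + c2 * b1) / det
    y = (-a1 * c2 + a2 * c1) / det
    return (x, y)
```

For example, the vertical line x = 2 and the horizontal line y = 3 intersect at (2, 3).<br />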
<br />
=== Monitor block ===<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run: on every tick, it is checked whether the current state has fulfilled its exit conditions, and if so, the corresponding state transition is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
''To be written (Kevin/Mike): a description of the spatial recognition functions.''<br />
<br />
==== Visualisation ====<br />
* ''To be written: a section on visualising the laser data, PICO itself and its intended goal.''<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement and thus prevent slip. Reducing slip in the motion of PICO increases the accuracy of its movement in addition to the fluency. The S-curve is implemented in two different functions: the function 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while the second function 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful information.<br />
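A jerk-limited velocity ramp of this kind can be sketched with a cubic smoothstep profile. This is an illustrative approximation, not the exact equations from the linked notes; the maximum translational velocity of 0.5 m/s matches the robot specification mentioned on this page.<br />

```python
def s_curve_velocity(t, t_ramp, v_max):
    """Velocity at time t for a smooth ramp from 0 to v_max over t_ramp.

    Uses the cubic smoothstep 3*tau^2 - 2*tau^3, whose slope (the
    acceleration) is zero at both ends of the ramp, so the acceleration
    stays continuous and the jerk at the transitions is limited."""
    if t <= 0.0:
        return 0.0
    if t >= t_ramp:
        return v_max
    tau = t / t_ramp
    return v_max * (3.0 * tau ** 2 - 2.0 * tau ** 3)
```

Halfway through a 2 s ramp to 0.5 m/s, the profile is at exactly half the target velocity.<br />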
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
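The repulsion-only potential field described above can be sketched as follows. Obstacle points are assumed to be given in the robot frame (e.g. derived from the LRF data), and the avoidance radius is a placeholder value; the actual tuning is not documented here.<br />

```python
import math

def repulsion_vector(obstacles, avoid_radius=0.5):
    """Sum of repulsive contributions from obstacle points (robot frame).

    Each point closer than avoid_radius pushes the robot away with a
    magnitude that grows as the distance shrinks; points outside the
    radius contribute nothing, so the vector is zero in open space and
    cancels out in a symmetric corridor."""
    fx = fy = 0.0
    for (x, y) in obstacles:
        r = math.hypot(x, y)
        if 0.0 < r < avoid_radius:
            m = 1.0 / r - 1.0 / avoid_radius
            fx -= m * x / r
            fy -= m * y / r
    return (fx, fy)
```

A single close obstacle on one side yields a vector pointing to the other side, while two symmetric obstacles cancel, matching the corridor cases in the figure.<br />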
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
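The orientation correction can be sketched as follows: compute the bearing to the goal from the odometry pose and wrap the error to [-pi, pi], so the correction always takes the shortest turn. This is an illustrative Python sketch with hypothetical names, not the actual code.<br />

```python
import math

def orientation_error(pose, goal):
    """Angle PICO must rotate to face its current goal.

    pose is (x, y, theta) from the odometry data, goal is (x, y); the
    result is wrapped to [-pi, pi] via atan2 so the correction takes
    the shortest turn."""
    x, y, theta = pose
    bearing = math.atan2(goal[1] - y, goal[0] - x)
    err = bearing - theta
    return math.atan2(math.sin(err), math.cos(err))
```
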
<br />
= Testing =<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Test Goals==<br />
Several tests were executed during the course of the project, each with a different goal. The most important goals have been summarised below:<br />
<br />
* Test the laser range finder and the encoders.<br />
* Determine the static friction in the actuators for the x- and y-direction, and the rotation.<br />
* Collect laserdata for the spatial recognition functions.<br />
* Test the Drivecontrol functionality, consisting of the S-curve implementation and the potential field.<br />
* Test the full system on the example map.<br />
<br />
==Results==<br />
The results of each test are described in separate parts.<br />
<br />
===Laser Range Finder & Encoders===<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, sampled at a rate that can be set by the user. In practice, however, the range differed. During the test, laser data appeared within the 10 cm radius of the robot; this data produced false positives and had to be filtered out. The maximum range of the range finder was actually larger than 10 meters, but this does not limit the functionality of the robot. The other properties of the laser range finder were accurate.<br />
<br />
The values supplied by the encoders are automatically converted to a distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. Due to the three-wheel configuration of the drivetrain, the ''x''- and ''y''-direction are less accurate than the rotation.<br />
<br />
===Static Friction===<br />
<br />
<br />
===Laserdata===<br />
<br />
<br />
===Drivecontrol Test===<br />
<br />
<br />
===Full System Test===<br />
<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First, the output values from the range finder can be saved to a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the amount of time it takes over which the maximum velocity of the robot is achieved in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
<br />
= Conclusion & Recommendations =<br />
''To be written (Mike/Kevin).''<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77408Embedded Motion Control 2019 Group 32019-06-16T10:32:53Z<p>S136625: /* Test Goals */</p>
<hr />
<div><br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
''To be added: a GIF of the challenge (Yves).''<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
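The chart above can be reduced to a small transition table. The state and event names below are a hypothetical reading of the diagram, not the names used in the actual code.<br />

```python
# Hypothetical transition table for the wall follower: maps
# (state, event) to the next state, following the state chart above.
TRANSITIONS = {
    ("DRIVE_FORWARD", "wall_ahead"): "ALIGN_WITH_WALL",
    ("ALIGN_WITH_WALL", "aligned"): "FOLLOW_WALL",
    ("FOLLOW_WALL", "wall_ahead"): "TURN_LEFT_90",
    ("TURN_LEFT_90", "turn_done"): "FOLLOW_WALL",
    ("FOLLOW_WALL", "gap_right"): "ENTER_CORRIDOR",
    ("ENTER_CORRIDOR", "gap_right"): "FINISHED",
}

def step(state, event):
    """Advance the state machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```
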
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution to avoid bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map from the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at different locations on the map: at cabinets, on junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from a neighbouring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can more accurately determine its location relative to B and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This can help in how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points the robot needs to drive through to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
TBD<br />
<br />
''To be written (Kevin), including GIFs of the challenge.''<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
''To be completed (Yves).''<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*Collin dijkstra shit<br />
<br />
==== Path planning ====<br />
The method of determine the path points is done automatic and by hand. The program will load the Json map file when the program starts. The code will detect where all the cabinets are and what the front is of a cabinet. Each cabinet path point will exactly be placed in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified within the direction variable. The direction will be subtracted to the real orientation of PICO and afterward be corrected if PICO is not aligned right.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
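Based on the tables above, a path point can be sketched as a small struct, with the listed path lengths computed as straight-line distances. This is an illustrative reconstruction under assumed names, not the group's actual code.<br />

```cpp
#include <cassert>
#include <cmath>

// Hypothetical representation of a path point: x/y coordinates in
// metres and a direction in radians (only meaningful for cabinet
// points). Names are assumptions, not the group's actual code.
struct PathPoint {
    double x;
    double y;
    double dir;
};

// Straight-line distance between two path points, as listed in the
// "Path lengths" tables above.
double pathLength(const PathPoint& a, const PathPoint& b) {
    return std::hypot(b.x - a.x, b.y - a.y);
}
```

For example, the listed length of path 4->5 (0.86) follows from the coordinates of point 4 (5.0, 2.5) and point 5 (5.5, 3.2).<br />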
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all the walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the straight line through these end points, written as ax + by + c = 0<br />
## For each data point (x0, y0) between these end points, determine the perpendicular distance to the line: d = |a*x0 + b*y0 + c| / sqrt(a^2 + b^2)<br />
## Compare the point with the largest distance against the distance limit value<br />
##* If this value falls below the limit value, there are no further segments (parts) within the global segment.<br />
##* If the value exceeds the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
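Steps 3.2 and 3.3 of the split algorithm can be sketched as follows. This is an illustrative reconstruction under assumed names, not the project's exact code:<br />

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the infinite line through a and b,
// i.e. d = |a*x + b*y + c| / sqrt(a^2 + b^2) for the line ax + by + c = 0.
double perpDistance(const Pt& p, const Pt& a, const Pt& b) {
    double A = b.y - a.y;
    double B = a.x - b.x;
    double C = -(A * a.x + B * a.y);
    return std::fabs(A * p.x + B * p.y + C) / std::hypot(A, B);
}

// One split step: return the index of the point farthest from the line
// between the segment's end points, or -1 if no point exceeds `limit`
// (in which case the segment is not split any further).
int findSplitIndex(const std::vector<Pt>& seg, double limit) {
    int best = -1;
    double bestDist = limit;
    for (std::size_t i = 1; i + 1 < seg.size(); ++i) {
        double d = perpDistance(seg[i], seg.front(), seg.back());
        if (d > bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}
```

When `findSplitIndex` returns an index, the segment is split there and the same step is applied recursively to the left and right parts.<br />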
<br />
Below is a visual representation of the split principle. The original image is taken from the 2017 EMC course, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*Mike<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and intersecting these lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto these lines.<br />
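The line correction amounts to two standard geometric operations, sketched here under assumed names: intersecting two fitted lines to obtain a corner, and projecting a found vertex perpendicularly onto a line to obtain a final end point.<br />

```cpp
#include <cassert>
#include <cmath>

struct P2 { double x, y; };

// Intersection of the infinite lines (a1,a2) and (b1,b2); assumes the
// lines are not parallel. Used to snap two fitted wall lines to a
// shared corner point.
P2 lineIntersect(P2 a1, P2 a2, P2 b1, P2 b2) {
    double d1x = a2.x - a1.x, d1y = a2.y - a1.y;
    double d2x = b2.x - b1.x, d2y = b2.y - b1.y;
    double denom = d1x * d2y - d1y * d2x;           // cross product of directions
    double t = ((b1.x - a1.x) * d2y - (b1.y - a1.y) * d2x) / denom;
    return {a1.x + t * d1x, a1.y + t * d1y};
}

// Perpendicular projection of vertex v onto the line (a1,a2): the final
// end point of a fitted wall segment.
P2 projectOntoLine(P2 v, P2 a1, P2 a2) {
    double dx = a2.x - a1.x, dy = a2.y - a1.y;
    double t = ((v.x - a1.x) * dx + (v.y - a1.y) * dy) / (dx * dx + dy * dy);
    return {a1.x + t * dx, a1.y + t * dy};
}
```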
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
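A tick-based state machine of this kind can be sketched as below; the states and transition table are illustrative placeholders, not the actual chart above.<br />

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Minimal sketch of the monitor's tick-based state machine. The state
// names here are placeholders; the real states are those in the chart.
enum class State { Idle, DriveToPoint, PositionAtCabinet, Finished };

struct Transition {
    std::function<bool()> exitCondition;  // checked on every tick
    State next;
};

// One tick: if the current state's exit condition holds, transition and
// report the change (requirement: communicate every state change).
State tick(State current, const std::map<State, Transition>& table,
           std::string* log) {
    auto it = table.find(current);
    if (it != table.end() && it->second.exitCondition()) {
        *log += "state change\n";
        return it->second.next;
    }
    return current;
}
```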
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Kevin's section on spatial recognition and related topics will be added here.<br />
<br />
==== Visualisation ====<br />
* A short section still to be written here on visualising the laser data, PICO itself and its intended goal.<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement and thus prevent slip. Reducing slip in the motion of PICO increases the accuracy of its movement on top of the fluency. The S-curve is implemented in two different functions: the function 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while the second function, 'Drive distance', accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
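As an illustration of the idea (not the group's exact implementation), an S-curve from standstill to a target speed can be generated with a smoothstep profile, which keeps velocity and acceleration continuous and the jerk bounded:<br />

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative S-curve velocity profile: smooth acceleration from 0 to
// vMax over time T using the smoothstep shape 3s^2 - 2s^3, so velocity
// and acceleration are continuous and jerk stays bounded. T and vMax
// would come from the robot's velocity/acceleration limits.
double sCurveVelocity(double t, double T, double vMax) {
    double s = std::clamp(t / T, 0.0, 1.0);   // normalised time in [0, 1]
    return vMax * s * s * (3.0 - 2.0 * s);    // smoothstep profile
}
```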
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far away enough from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
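The repulsion-only potential field described above can be sketched as a weighted vector sum over the laser points inside the avoidance region. The gain and falloff used here are illustrative assumptions, not the tuned values from the config file.<br />

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

// Repulsion-only potential field: every laser point closer than
// `avoidRadius` contributes a vector pointing away from the obstacle,
// growing as the obstacle gets closer. `ranges[i]` is the measured
// distance of beam i, `angles[i]` its angle in the robot frame.
Vec2 repulsionVector(const std::vector<double>& ranges,
                     const std::vector<double>& angles,
                     double avoidRadius, double gain) {
    Vec2 sum{0.0, 0.0};
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double r = ranges[i];
        if (r <= 0.0 || r >= avoidRadius) continue;   // outside avoidance region
        double w = gain * (avoidRadius - r) / r;      // stronger when closer
        sum.x -= w * std::cos(angles[i]);             // push away from the point
        sum.y -= w * std::sin(angles[i]);
    }
    return sum;
}
```

In a symmetric corridor the left and right contributions cancel, matching the second image above; a single nearby obstacle produces a vector pointing directly away from it.<br />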
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Test Goals==<br />
Several tests were executed during the course of the project, each with a different goal. The most important goals have been summarised below:<br />
<br />
* Determine the properties of the laser range finder and the encoders.<br />
* Determine the static friction in the actuators for the x- and y-direction, and the rotation.<br />
* Collect laserdata for the spatial recognition functions.<br />
* Test the Drivecontrol functionality, consisting of the S-curve implementation and the potential field.<br />
* Test the full system on the example map.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and its field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurements, which are taken at a rate that can be set by the user.<br />
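Under these simulated specifications, the angle of an individual beam can be computed as follows (the exact beam indexing is an assumption):<br />

```cpp
#include <cassert>
#include <cmath>

// Beam angle for the simulated rangefinder: 1000 beams spread over
// -114.6 .. +114.6 degrees (field of view 229.2 degrees), assuming
// beam 0 is the rightmost and beam 999 the leftmost.
double beamAngleDeg(int i, int nBeams = 1000, double fovDeg = 229.2) {
    double step = fovDeg / (nBeams - 1);  // angular spacing between beams
    return -fovDeg / 2.0 + i * step;
}
```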
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]<br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div><br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves is making a GIF<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
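The per-tick decision of this wall follower can be sketched as a simple priority check. The distance thresholds below are illustrative, not the values from the config file:<br />

```cpp
#include <cassert>

// Sketch of the wall follower's per-tick decision, following the state
// chart above: front obstacle -> rotate CCW to the next wall; gap to
// the right -> turn into it; otherwise stay within the distance band.
enum class Action { Forward, SteerTowardWall, SteerAwayFromWall,
                    RotateCCW, TurnIntoGap };

Action followWallStep(double frontDist, double rightDist,
                      double minRight = 0.3, double maxRight = 0.5,
                      double frontStop = 0.4, double gapDist = 1.0) {
    if (frontDist < frontStop) return Action::RotateCCW;      // next wall found
    if (rightDist > gapDist)   return Action::TurnIntoGap;    // doorway to the right
    if (rightDist > maxRight)  return Action::SteerTowardWall;
    if (rightDist < minRight)  return Action::SteerAwayFromWall;
    return Action::Forward;                                   // stay in the band
}
```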
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest-performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown and proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In placing these points, it is important that each point can be reached from a neighbouring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to approximately where B is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether the path runs through a doorway or hallway, or through a room. This can help determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not a neighbour, the software will create a route to that point. This route is a list of points to which the robot needs to drive in order to reach the end point. To make sure the route is as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
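A minimal sketch of Dijkstra's algorithm over such a path-point graph is given below, assuming an adjacency-list representation (node indices with edge weights equal to the path lengths); this is illustrative, not the group's exact code:<br />

```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's shortest-path algorithm: nodes are path points, edge
// weights the straight-line path lengths. Returns the route from
// `start` to `goal` as a list of node indices.
std::vector<int> shortestRoute(
        const std::vector<std::vector<std::pair<int, double>>>& adj,
        int start, int goal) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), inf);
    std::vector<int> prev(adj.size(), -1);
    using QItem = std::pair<double, int>;                 // (distance, node)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;                        // stale queue entry
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {                  // relax the edge
                dist[v] = dist[u] + w;
                prev[v] = u;
                pq.push({dist[v], v});
            }
        }
    }
    std::vector<int> route;                               // walk back via prev[]
    for (int v = goal; v != -1; v = prev[v]) route.insert(route.begin(), v);
    return route;
}
```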
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin + GIFs and so on<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic: it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
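Because the drivetrain is holonomic, a desired velocity in the world frame can be commanded directly by rotating it into the robot frame; a sketch of that transform (names are assumptions, not the framework's API):<br />

```cpp
#include <cassert>
#include <cmath>

struct Velocity { double vx, vy, va; };

// Rotate a world-frame velocity into the robot frame, where `theta` is
// the robot's orientation in the world frame. The angular velocity va
// is frame-independent and passes through unchanged.
Velocity worldToRobot(double vxWorld, double vyWorld, double va,
                      double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    return { c * vxWorld + s * vyWorld,      // forward component
            -s * vxWorld + c * vyWorld,      // sideways component
             va };
}
```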
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition show a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component will be given, together with the source of the specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves to finish<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
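Such a filter can be sketched as follows; in practice the angle association of each beam would need to be preserved, so this is a simplified illustration using the 0.1 m / 10 m range specification:<br />

```cpp
#include <cassert>
#include <vector>

// The perception step drops invalid rangefinder points: anything below
// the sensor's minimum range (e.g. spurious hits at the origin caused
// by dust on the sensor) or beyond its maximum range.
std::vector<double> filterRanges(const std::vector<double>& raw,
                                 double minRange = 0.1,
                                 double maxRange = 10.0) {
    std::vector<double> out;
    out.reserve(raw.size());
    for (double r : raw)
        if (r >= minRange && r <= maxRange)
            out.push_back(r);                 // keep only valid returns
    return out;
}
```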
<br />
=== Detection ===<br />
<br />
*Collin: Dijkstra section to be added<br />
<br />
==== Path planning ====<br />
The method of determine the path points is done automatic and by hand. The program will load the Json map file when the program starts. The code will detect where all the cabinets are and what the front is of a cabinet. Each cabinet path point will exactly be placed in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified within the direction variable. The direction will be subtracted to the real orientation of PICO and afterward be corrected if PICO is not aligned right.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, he must know where he is in the world map and what is around him. PICO is equipped with a LIDAR scanner that scans the environment with the help of laser beams. This data is then processed to be able to determine where all walls and objects are. There are many ways in which you can process the data into useful information. A commonly used algorithm is the split and merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we do the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points, written as ax + by + c = 0<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the largest of these distances with the distance threshold<br />
##* If it falls below the threshold, the global segment contains no further sub-segments.<br />
##* If it exceeds the threshold, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
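As an illustration, the split step (3.1 to 3.4) can be sketched as a recursive function. This is a minimal sketch, not the project code; the point format and threshold are assumptions:<br />

```python
import math

def split(points, threshold):
    """Recursively split a list of (x, y) points at the point farthest
    from the line through the segment's end points. Returns the indices
    that delimit straight sub-segments (walls)."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    # Line through the end points in the form a*x + b*y + c = 0 (step 3.2).
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = math.hypot(a, b)
    # Perpendicular distance of every point to that line (step 3.3).
    dists = [abs(a * x + b * y + c) / norm for (x, y) in points]
    i_max = max(range(len(points)), key=lambda i: dists[i])
    if dists[i_max] < threshold:
        return [0, len(points) - 1]   # no further sub-segments (step 3.4)
    # Otherwise split at the farthest point and recurse on both halves.
    left = split(points[:i_max + 1], threshold)
    right = split(points[i_max:], threshold)
    return left + [i + i_max for i in right[1:]]
```

For an L-shaped scan such as [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)] the sketch returns [0, 2, 4], i.e. a corner at index 2.<br />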
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of the RANSAC function.'''<br />
<br />
A final line correction is needed, because the RANSAC function only returns start and end points that lie somewhere between the found vertices. The lines must be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and intersecting them with each other; the final end points are found by projecting the found vertices perpendicularly onto these lines.<br />
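The corner correction amounts to intersecting two fitted wall lines. A minimal sketch, assuming lines are stored as (a, b, c) coefficients of ax + by + c = 0 (an assumption, not the project's actual data structure):<br />

```python
def line_intersection(l1, l2):
    """Corner point where two fitted wall lines cross. Lines are given
    as (a, b, c) with a*x + b*y + c = 0; returns None for (nearly)
    parallel walls."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    # Cramer's rule on a1*x + b1*y = -c1 and a2*x + b2*y = -c2.
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)
```

For example, the wall y = 0 (coefficients (0, 1, 0)) and the wall x = 2 (coefficients (1, 0, -2)) intersect at the corner (2, 0).<br />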
<br />
=== Monitor block ===<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions; if so, the state machine transitions to the next state.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
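A hypothetical sketch of such a monitor tick; the state names and condition functions below are made up for illustration and are not the project's actual states:<br />

```python
def tick(state, transitions, world):
    """One monitor tick: if the current state's exit condition holds,
    transition; otherwise stay in the current state. `transitions` maps
    a state name to a list of (condition, next_state) pairs."""
    for condition, next_state in transitions.get(state, []):
        if condition(world):
            # Communicate the state change, as required by the challenge rules.
            print("State change: %s -> %s" % (state, next_state))
            return next_state
    return state
```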
<br />
=== World model block ===<br />
''To be written: spatial recognition.''<br />
<br />
==== Visualisation ====<br />
* To be written: visualising the laser data, PICO itself and its intended goal.<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for every velocity change. The S-curve was chosen to limit jerk in the robot's movement and thus prevent slip. Reducing slip increases the accuracy of PICO's movement on top of its fluency. The S-curve is implemented in two functions: 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
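As a sketch of the idea (one possible profile, not the project's actual implementation, which follows the S-curve equations linked under Useful Information): a velocity set-point that ramps with zero acceleration at both ends, so the jerk stays bounded:<br />

```python
def s_curve_velocity(t, v_target, t_acc):
    """Velocity set-point at time t for a smooth ramp from 0 to v_target
    over t_acc seconds. The cubic 'smoothstep' shape starts and ends with
    zero acceleration, keeping jerk bounded."""
    if t <= 0.0:
        return 0.0
    if t >= t_acc:
        return v_target
    s = t / t_acc
    return v_target * (3.0 * s ** 2 - 2.0 * s ** 3)
```

Halfway through the ramp the set-point is half the target velocity, and the profile meets both end points with zero slope in acceleration.<br />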
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until it is centred again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
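The repulsion-only field can be sketched as a weighted sum over the LIDAR hits inside the avoidance region; the parameter names below are illustrative, not the project's:<br />

```python
import math

def repulsion_vector(scan, angle_min, angle_inc, d_avoid):
    """Sum of repulsive contributions from all LIDAR hits closer than
    d_avoid. Each nearby hit pushes the robot directly away from it,
    weighted more strongly the closer it is."""
    fx = fy = 0.0
    for i, r in enumerate(scan):
        if 0.0 < r < d_avoid:
            angle = angle_min + i * angle_inc
            w = (d_avoid - r) / d_avoid   # 0 at the region edge, 1 at contact
            fx -= w * math.cos(angle)     # push away from the hit
            fy -= w * math.sin(angle)
    return fx, fy
```

In a symmetric corridor the left and right contributions cancel, so the sum vector is zero, matching the second situation in the figure above.<br />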
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
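The orientation correction can be sketched as follows; this is a minimal sketch, and the pose and goal arguments are illustrative:<br />

```python
import math

def heading_error(x, y, theta, gx, gy):
    """Rotation needed so PICO, at odometry pose (x, y, theta), looks
    directly at the goal (gx, gy). The result is wrapped so the
    correction always takes the short way around."""
    desired = math.atan2(gy - y, gx - x)
    err = desired - theta
    return math.atan2(math.sin(err), math.cos(err))  # wrap to (-pi, pi]
```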
<br />
= Testing =<br />
<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Test Goals==<br />
Several tests were executed during the course of the project, each with a different goal. The most important goals have been summarised below:<br />
<br />
* Determine the properties of the laser range finder and the encoders.<br />
* Determine the static friction in the actuators for the x- and y-direction, and the rotation.<br />
* Test the Drivecontrol functionality, consisting of the S-curve implementation and the potential field.<br />
* Test the full system on the example map.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and its field of view spans -114.6 to +114.6 degrees as measured from the front of the robot. The field of view is divided into 1000 measurement points, sampled at a rate that can be configured by the user.<br />
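Assuming the 1000 parts are beams spread evenly over the field of view (an assumption to be verified on the real robot), the angle of beam i follows as:<br />

```python
def beam_angle(i, n_beams=1000, fov_deg=229.2):
    """Angle in degrees, relative to the front of the robot, of beam i:
    beam 0 points at -114.6 degrees, the last beam at +114.6 degrees."""
    return -fov_deg / 2.0 + i * fov_deg / (n_beams - 1)
```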
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall; this gives the maximum range. The same can be done while driving forward to determine the minimum range. To determine the field of view, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the shortest time in which the robot smoothly reaches its maximum velocity. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications. We felt, however, that the modifications were worth the slowdown, and they proved the robustness of our software's simple approach.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at various locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from a neighbouring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This helps determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software creates a route to that point: a list of points the robot needs to visit to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route, namely Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
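A minimal sketch of Dijkstra's algorithm on the point map; the graph fragment used below is hand-made from a few of the path lengths in the path-planning tables, for illustration only:<br />

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm on the point map. `graph` maps a point to a
    list of (neighbour, length) pairs; returns the shortest list of
    points to drive through, or None if the goal is unreachable."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path          # first time we pop the goal, the route is optimal
        if node in visited:
            continue
        visited.add(node)
        for nxt, length in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + length, nxt, path + [nxt]))
    return None
```

For example, with edges 4->5 (0.86), 4->6 (1.49), 5->3 (0.8) and 5->6 (0.7), the shortest route from point 4 to cabinet point 3 is [4, 5, 3].<br />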
<br />
== Reflection ==<br />
TBD<br />
<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given together with their source.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate it about the ''z''-axis. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement points. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
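A sketch of this conditioning step; the range limits are the simulator values, and replacing invalid readings by None is an illustrative choice, not necessarily the project's:<br />

```python
def filter_scan(ranges, range_min=0.1, range_max=10.0):
    """Condition the LRF data: readings at (or suspiciously near) the
    sensor origin, e.g. caused by dust, and readings beyond the sensor's
    valid range are replaced by None so later stages skip them."""
    return [r if range_min <= r <= range_max else None for r in ranges]
```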
<br />
=== Detection ===<br />
<br />
<br />
==== Path planning ====<br />
The path points are determined both automatically and by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from PICO's actual orientation, and the difference is corrected if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
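The tabulated path lengths are simply the Euclidean distances between the path-point coordinates above (rounded to two decimals); a few of them can be checked as follows:<br />

```python
import math

# A few path-point coordinates copied from the tables above.
POINTS = {3: (6.3, 3.2), 4: (5.0, 2.5), 5: (5.5, 3.2), 6: (5.5, 3.9)}

def edge_length(a, b):
    """Straight-line length of the path between two points on the map."""
    (xa, ya), (xb, yb) = POINTS[a], POINTS[b]
    return math.hypot(xb - xa, yb - ya)
```

This reproduces, for instance, 4->5 = 0.86, 4->6 = 1.49 and 5->3 = 0.8 from the path length tables.<br />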
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, he must know where he is in the world map and what is around him. PICO is equipped with a LIDAR scanner that scans the environment with the help of laser beams. This data is then processed to be able to determine where all walls and objects are. There are many ways in which you can process the data into useful information. A commonly used algorithm is the split and merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we do the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the linear line between these end points (by = ax + c)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If our value falls below the limit value then there are no more segments (parts) in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*mike<br />
<br />
'''To be extended met een verhaal van Mike over zijn ransac functie!!!!'''<br />
<br />
A final line correction needs to be done because the RANSAC function gives only start and endpoints somewhere between the found vertices. The lines need to be fitted so that the corners and endpoints align to the real wall lines. This is simply done by determining the lines between the points and then equate the lines to each other. The final endpoints are determined by equating the point on the line where the found vertices are perpendicular to this line.<br />
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case the corresponding state transition is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Kevin's section on spatial recognition is to be added here.<br />
<br />
==== Visualisation ====<br />
* A section is still to be written here on visualising the laser data, PICO itself and its intended goal.<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement and thus prevent slip. The reduction of slip in PICO's motion increases the accuracy of its movement on top of the fluency. The S-curve is implemented in two different functions: the function 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while the second function 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
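As a rough sketch of the idea (not the actual 'Drive' implementation, whose profile parameters are not reproduced here), a jerk-limited velocity ramp can be written as a cubic in normalised time with zero acceleration at both ends:<br />

```cpp
#include <cmath>

// Jerk-limited "S-curve" ramp from standstill to vMax over a time T:
// a cubic in normalised time with zero acceleration at both ends, so the
// commanded velocity changes smoothly instead of stepping, which limits
// jerk and helps prevent wheel slip.
double sCurveVelocity(double t, double vMax, double T) {
    if (t <= 0.0) return 0.0;
    if (t >= T)   return vMax;
    double s = t / T;                              // normalised time [0, 1]
    return vMax * (3.0 * s * s - 2.0 * s * s * s); // smooth cubic ramp
}
```

Halfway through the ramp the commanded velocity is half of vMax, and the slope (acceleration) tapers off to zero at both ends of the ramp.<br />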
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero; once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
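A minimal sketch of such a repulsion-only field could look as follows; repulsionVector and the linear weighting are illustrative choices, not the exact formula used on PICO:<br />

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Repulsion-only potential field: each laser point inside the avoidance
// radius contributes a vector pointing away from the obstacle, weighted
// by how far it intrudes into the avoidance region. Points are given in
// the robot frame (robot at the origin).
Vec2 repulsionVector(const std::vector<Vec2>& points, double avoidRadius) {
    Vec2 sum{0.0, 0.0};
    for (const Vec2& p : points) {
        double d = std::hypot(p.x, p.y);            // distance to obstacle
        if (d < 1e-6 || d >= avoidRadius) continue;
        double w = (avoidRadius - d) / avoidRadius; // 0 at edge, ~1 up close
        sum.x -= w * p.x / d;                       // push away from point
        sum.y -= w * p.y / d;
    }
    return sum;
}
```

A single wall point inside the radius pushes the robot straight away from it, while the symmetric corridor case sums to zero, matching the second image above.<br />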
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
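The orientation correction boils down to computing the heading error towards the goal from the odometry pose and wrapping it into (-pi, pi]; headingError is an illustrative name, not the actual function in the software:<br />

```cpp
#include <cmath>

// Heading error towards the current goal, wrapped to (-pi, pi], computed
// from the odometry pose (x, y, theta) and the goal position.
double headingError(double x, double y, double theta,
                    double goalX, double goalY) {
    double desired = std::atan2(goalY - y, goalX - x);
    double err = desired - theta;
    while (err >  M_PI) err -= 2.0 * M_PI;   // wrap into (-pi, pi]
    while (err <= -M_PI) err += 2.0 * M_PI;
    return err;
}
```

The sign of the result directly gives the direction in which PICO should rotate to face its goal again.<br />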
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Test Goals==<br />
Several tests were executed during the course of the project, each with a different goal. The most important goals have been summarised below:<br />
<br />
* Determine the properties of the laser range finder and the encoders.<br />
* Determine the static friction in the actuators for the x- and y-directions, and for the rotation.<br />
* Test the Drivecontrol functionality, consisting of the S-curve implementation and the potential field.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and its field of view spans +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled in 1000 beams, at a time interval that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First, the output values of the range finder can be saved to a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the time over which the maximum velocity of the robot is reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77394Embedded Motion Control 2019 Group 32019-06-16T09:54:46Z<p>S136625: /* Drivecontrol */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
This wiki page describes the approach and process of group 3 concerning the Escape room challenge and the Hospital challenge with the PICO robot. The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Hospital challenge involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
The wiki is subdivided in the following parts: Firstly, the approach for the Escape room challenge is explained and evaluated. The second topic is the approach and evaluation of the Hospital challenge. This is followed by a full description of the system architecture used to perform the Hospital challenge. After the system architecture, the most important tests and test results are explained. Lastly, a conclusion and recommendation is provided.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves will add a gif.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of the aforementioned modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This can help in deciding how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighbouring, the software creates a route to that point. This route is a list of points to which the robot needs to drive consecutively to get to the end point. To make sure the route is as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
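Dijkstra's algorithm on the point map can be sketched as follows, with path points as nodes and the path lengths between neighbouring points as edge weights; shortestRoute is an illustrative name, not the actual function in the software:<br />

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's shortest-route search over the point map. Nodes are path
// points; adj[u] lists (neighbour, straight-line distance) pairs. Returns
// the route from start to goal as a list of point indices (assuming the
// goal is reachable).
std::vector<int> shortestRoute(
        const std::vector<std::vector<std::pair<int, double>>>& adj,
        int start, int goal) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), inf);
    std::vector<int> prev(adj.size(), -1);
    using QItem = std::pair<double, int>;          // (distance, node)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;                 // stale queue entry
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                prev[v] = u;
                pq.push({dist[v], v});
            }
        }
    }
    std::vector<int> route;                        // walk back from goal
    for (int v = goal; v != -1; v = prev[v])
        route.insert(route.begin(), v);
    return route;
}
```

For example, on a small triangle of points where the direct edge is longer than the detour via a middle point, the algorithm returns the detour.<br />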
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin + gifs and so on<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition show a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance of the sensor ranges from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°, which is sampled in 1000 beams. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves afmaken<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*Collin dijkstra shit<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and where the front of each cabinet is. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is compared with PICO's actual orientation, and the orientation is corrected afterwards if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used approach is the split-and-merge algorithm, extended with the RANSAC algorithm. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering the measurement data<br />
# Recognising and splitting global segments (recognising multiple walls or objects)<br />
# Applying the split algorithm per segment:<br />
## Determine the end points of the segment.<br />
## Determine the straight line through these end points, in implicit form a*x + b*y + c = 0.<br />
## For each data point between the end points, determine the perpendicular distance to this line: d = abs(a*x + b*y + c)/sqrt(a^2 + b^2).<br />
## Compare the largest of these distances with the distance threshold:<br />
##* If the largest distance falls below the threshold, there are no further segments (parts) within the global segment.<br />
##* If it exceeds the threshold, the segment is split at that point, and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
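The split step above can be sketched as follows. This is a minimal illustration rather than the project code; the names Point, pointLineDistance and findSplitIndex are chosen for this example only:<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Perpendicular distance from p to the line through a and b, i.e.
// d = |A*x + B*y + C| / sqrt(A^2 + B^2) with the line in implicit form.
double pointLineDistance(const Point& p, const Point& a, const Point& b) {
    double A = b.y - a.y;                  // implicit form A*x + B*y + C = 0
    double B = a.x - b.x;
    double C = b.x * a.y - a.x * b.y;
    return std::fabs(A * p.x + B * p.y + C) / std::sqrt(A * A + B * B);
}

// One split step: find the data point farthest from the line between the
// segment's end points; report where to split if it exceeds the threshold.
// Returns the index of the split point, or -1 if no split is needed.
int findSplitIndex(const std::vector<Point>& seg, double threshold) {
    double dmax = 0.0;
    int imax = -1;
    for (std::size_t i = 1; i + 1 < seg.size(); ++i) {
        double d = pointLineDistance(seg[i], seg.front(), seg.back());
        if (d > dmax) { dmax = d; imax = static_cast<int>(i); }
    }
    return (dmax > threshold) ? imax : -1;
}
```

Applying findSplitIndex recursively to the parts on either side of the returned index yields the full split phase described in step 3.<br />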
<br />
Below is a visual representation of the split principle. The original image is taken from the wiki of the 2017 EMC course, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*mike<br />
<br />
'''To be extended with a description by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then intersecting these lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto the corrected line.<br />
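This correction can be sketched as follows, assuming the fitted walls are available in implicit form a*x + b*y + c = 0. The names intersect and project are ours, not taken from the actual software:<br />

```cpp
#include <cmath>
#include <stdexcept>

struct Point { double x, y; };
struct Line  { double a, b, c; };          // wall line: a*x + b*y + c = 0

// Corner point where two fitted wall lines meet (solving the 2x2 system).
Point intersect(const Line& l1, const Line& l2) {
    double det = l1.a * l2.b - l2.a * l1.b;
    if (std::fabs(det) < 1e-9)
        throw std::runtime_error("lines are (nearly) parallel");
    return { (l1.b * l2.c - l2.b * l1.c) / det,
             (l2.a * l1.c - l1.a * l2.c) / det };
}

// Perpendicular projection of a vertex onto a wall line: used to snap a
// RANSAC end point onto the corrected line.
Point project(const Point& p, const Line& l) {
    double t = (l.a * p.x + l.b * p.y + l.c) / (l.a * l.a + l.b * l.b);
    return { p.x - l.a * t, p.y - l.b * t };
}
```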
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case the corresponding state transition is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Kevin's section on spatial recognition is to be added here.<br />
<br />
==== Visualisation ====<br />
* A section is still to be written here on visualising the laser data, PICO itself and its intended goal.<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement and thus prevent slip. The reduction of slip in PICO's motion increases the accuracy of its movement on top of the fluency. The S-curve is implemented in two different functions: the function 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while the second function 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
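As a rough sketch of the idea (not the actual 'Drive' implementation, whose profile parameters are not reproduced here), a jerk-limited velocity ramp can be written as a cubic in normalised time with zero acceleration at both ends:<br />

```cpp
#include <cmath>

// Jerk-limited "S-curve" ramp from standstill to vMax over a time T:
// a cubic in normalised time with zero acceleration at both ends, so the
// commanded velocity changes smoothly instead of stepping, which limits
// jerk and helps prevent wheel slip.
double sCurveVelocity(double t, double vMax, double T) {
    if (t <= 0.0) return 0.0;
    if (t >= T)   return vMax;
    double s = t / T;                              // normalised time [0, 1]
    return vMax * (3.0 * s * s - 2.0 * s * s * s); // smooth cubic ramp
}
```

Halfway through the ramp the commanded velocity is half of vMax, and the slope (acceleration) tapers off to zero at both ends of the ramp.<br />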
<br />
Drive has been further incorporated into a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field: the leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles, and the rightmost image shows the combination of the two. Any wall or object is taken into account in this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector vanishes when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
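The repulsion-only variant described above can be sketched as follows. This is an illustrative approximation under assumed names (''repulsion_vector'', ''r_avoid'', ''gain''); the real implementation may weigh the laser points differently:<br />

```python
import math

def repulsion_vector(scan, angle_min, angle_increment, r_avoid=0.5, gain=1.0):
    """Sum the repulsive contributions of all laser points within r_avoid.

    Each point inside the avoidance region pushes the robot directly away
    from it, with a magnitude that grows as the point gets closer and that
    is exactly zero at the edge of the region. Points in a symmetric
    environment (e.g. a centred corridor) cancel each other out.
    """
    fx = fy = 0.0
    for i, r in enumerate(scan):
        if 0.0 < r < r_avoid:
            ang = angle_min + i * angle_increment
            w = gain * (1.0 / r - 1.0 / r_avoid)  # zero at the region edge
            fx -= w * math.cos(ang)               # push away from the point
            fy -= w * math.sin(ang)
    return fx, fy
```

For example, two equally close walls on the left and right yield a zero sum vector, while a single obstacle straight ahead yields a vector pointing backward.<br />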
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
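The orientation correction step can be sketched as computing a heading error from the odometry pose and wrapping it to [-pi, pi]. The function name ''heading_error'' is a hypothetical illustration of the idea, not the actual code:<br />

```python
import math

def heading_error(robot_x, robot_y, robot_a, goal_x, goal_y):
    """Angle PICO must still rotate so that it looks directly at its goal.

    The pose (robot_x, robot_y, robot_a) comes from the odometry data,
    which is reset at every path point, so drift stays small. The result
    is wrapped to [-pi, pi] so the robot always turns the short way.
    """
    desired = math.atan2(goal_y - robot_y, goal_x - robot_x)
    err = desired - robot_a
    return math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
```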
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled in 1000 beams, at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico Embedded Motion Control/Using Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
S136625
https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77393
Embedded Motion Control 2019 Group 3
2019-06-16T09:54:22Z
<p>S136625: /* World model block */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
This wiki page describes the approach and process of group 3 concerning the Escape room challenge and the Hospital challenge with the PICO robot. The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Hospital challenge involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
The wiki is subdivided in the following parts: Firstly, the approach for the Escape room challenge is explained and evaluated. The second topic is the approach and evaluation of the Hospital challenge. This is followed by a full description of the system architecture used to perform the Hospital challenge. After the system architecture, the most important tests and test results are explained. Lastly, a conclusion and recommendation is provided.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves will add a GIF clip here.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications. We felt, however, that the modifications were worth the slowdown and that they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital map. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at various locations on the map: at cabinets, on junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be approached from a neighbouring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighbouring, the software creates a route to that point. This route is a list of points through which the robot needs to drive to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
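Dijkstra's algorithm on the point map can be sketched as below. The graph encoding and the function name are illustrative assumptions; the example distances are taken from a few of the path lengths used for the challenge:<br />

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest route through the point map.

    `graph` maps a point id to a list of (neighbour, distance) pairs;
    returns (total length, route as a list of point ids). For the real
    map the adjacency would be symmetric; here it is directed for brevity.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in visited:
            continue
        visited.add(node)
        for nbr, d in graph[node]:
            if nbr not in visited:
                heapq.heappush(queue, (dist + d, nbr, route + [nbr]))
    return float("inf"), []  # goal unreachable (e.g. blocked route)

# Small example: start point 4, cabinet 3, with a few path lengths.
graph = {4: [(5, 0.86), (6, 1.49)], 5: [(3, 0.8), (6, 0.7)],
         6: [(3, 1.06), (7, 1.7)], 3: [], 7: []}
```

Running `dijkstra(graph, 4, 3)` prefers the route via point 5 (length 0.86 + 0.8) over the route via point 6.<br />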
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin will add GIFs and other material here.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate it about the ''z'' axis. The maximum speed of the robot is limited to ''±0.5 m/s'' in translation and ''±1.2 rad/s'' in rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. This field of view is divided into 1000 beams. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves: to be completed.<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
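Such conditioning can be sketched as a simple range filter. The limits correspond to the rated 0.1 m to 10 m range of the sensor; the function name ''filter_scan'' is a hypothetical illustration:<br />

```python
def filter_scan(ranges, range_min=0.1, range_max=10.0):
    """Replace invalid LRF readings with None.

    Invalid readings include spurious points measured at the origin of
    the sensor (e.g. caused by dust on the sensor) and beams outside the
    rated range; marking them lets the feature detection skip them.
    """
    return [r if range_min <= r <= range_max else None for r in ranges]
```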
<br />
=== Detection ===<br />
<br />
*Collin: section on Dijkstra's algorithm to be added.<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation PICO needs to have to stand in front of the cabinet. The direction is subtracted from the actual orientation of PICO, after which the alignment is corrected if PICO is not aligned correctly.<br />
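A path point might be represented as sketched below. The structure and the helper ''cabinet_point'', which places a cabinet point in the middle of a front area given by its corner coordinates, are illustrative assumptions, not the actual code:<br />

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PathPoint:
    x: float                           # position on the map [m]
    y: float
    direction: Optional[float] = None  # facing angle [rad]; cabinet points only

def cabinet_point(front_area):
    """Place a cabinet path point exactly in the middle of the virtual
    area in front of the cabinet; `front_area` is a list of (x, y)
    corner coordinates of that area."""
    xs = [p[0] for p in front_area]
    ys = [p[1] for p in front_area]
    return PathPoint(sum(xs) / len(xs), sum(ys) / len(ys))
```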
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = abs(a*x + b*y + c) / sqrt(a^2 + b^2))<br />
## Compare the largest of these distances with the distance limit value<br />
##* If this value falls below the limit value, there are no further sub-segments in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
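Steps 3.1 to 3.4 can be sketched as a recursive function. This is an illustrative version with hypothetical names (''split_segment'', ''threshold''); the real implementation works on the filtered laser points:<br />

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.hypot(y2 - y1, x2 - x1)

def split_segment(points, threshold=0.05):
    """Return the corner points of a segment (split step).

    Draw the line between the end points, find the data point farthest
    from it, and split the segment there if that distance exceeds the
    limit value; otherwise the segment is a single straight wall.
    """
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    dists = [point_line_distance(p, a, b) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= threshold:
        return [a, b]                       # no further sub-segments
    left = split_segment(points[:i + 1], threshold)
    right = split_segment(points[i:], threshold)
    return left[:-1] + right                # avoid duplicating the split point
```

For an L-shaped set of points, the function returns the two end points plus the corner point between the two walls.<br />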
<br />
Below is a visual representation of the split principle. The original image is taken from the page of EMC course 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*mike<br />
<br />
'''To be extended with Mike's description of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating these lines to each other. The final end points are found by perpendicularly projecting the found vertices onto the fitted lines.<br />
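Equating two fitted lines of the form ax + by + c = 0 amounts to solving a 2x2 linear system; a corner point can be sketched as their intersection (the function name ''intersect'' is hypothetical):<br />

```python
def intersect(l1, l2):
    """Corner point where two fitted wall lines a*x + b*y + c = 0 meet.

    Solves the 2x2 system with Cramer's rule; returns None when the
    lines are (near-)parallel and no corner exists.
    """
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                     # parallel walls, no corner
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return x, y
```

For example, a horizontal wall (y = 0) and a vertical wall (x = 1) intersect at the corner (1, 0).<br />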
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit condition; if so, the state machine transitions to the next state.<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, that will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Kevin's section on spatial recognition and related topics will be added here.<br />
<br />
==== Visualisation ====<br />
* To do: write a short section here on visualising the laser data, PICO itself and its intended goal.<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement and thus prevent slip. Reducing slip increases the accuracy of PICO's movement on top of its fluency. The S-curve is implemented in two functions: 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Drive has been further incorporated into a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field: the leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles, and the rightmost image shows the combination of the two. Any wall or object is taken into account in this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector vanishes when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
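The orientation correction can be illustrated with a short sketch; the function name and signature are assumptions, with the odometry pose taken as (x, y, theta) relative to the last path point.<br />

```python
import math

def heading_correction(x, y, theta, goal_x, goal_y):
    """Angle PICO must rotate over so that it faces its current goal.

    (x, y, theta) come from the odometry data, which is reset at the
    previous path point; the result is wrapped to [-pi, pi] so the
    robot always turns the short way round.
    """
    desired = math.atan2(goal_y - y, goal_x - x)
    error = desired - theta
    # Wrap the difference into [-pi, pi].
    return math.atan2(math.sin(error), math.cos(error))
```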
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement angles, sampled at a rate that can be set by the user.<br />
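Under the assumption that the 1000 measurements are spread evenly over the field of view (with the end points included), the angle of each beam follows directly from its index:<br />

```python
import math

# Laser range finder geometry as observed in the simulator:
# 1000 beams over -114.6 .. +114.6 degrees (assumed inclusive end points).
NUM_BEAMS = 1000
ANGLE_MIN = math.radians(-114.6)
ANGLE_MAX = math.radians(114.6)

def beam_angle(i):
    """Angle of beam `i` (0 .. NUM_BEAMS-1) relative to the robot's front."""
    increment = (ANGLE_MAX - ANGLE_MIN) / (NUM_BEAMS - 1)
    return ANGLE_MIN + i * increment
```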
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico Using Pico]<br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First, the output values from the range finder can be saved to a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time it takes to reach the maximum velocity in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
This wiki page describes the approach and process of group 3 concerning the Escape room challenge and the Hospital challenge with the PICO robot. The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Hospital challenge involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
The wiki is subdivided in the following parts: Firstly, the approach for the Escape room challenge is explained and evaluated. The second topic is the approach and evaluation of the Hospital challenge. This is followed by a full description of the system architecture used to perform the Hospital challenge. After the system architecture, the most important tests and test results are explained. Lastly, a conclusion and recommendation is provided.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves: add a GIF of the challenge run<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit and, once identified, drive towards it and on to the finish line. In case the robot could not identify the exit, it would fall back on following the wall instead, as a robust backup plan. The testing session before the challenge proved too short, and only the wall follower could be tested; therefore, only the wall-follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance it keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest-performing group as a result of these modifications. We felt, however, that the slowdown was worth it, as it demonstrated the robustness of our software's simple approach.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be reached from a neighboring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can more accurately determine its location relative to B and drive to B. For the path between points, it can be defined whether this path leads through a doorway or hallway, or whether it is through a room. This helps determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighbouring, the software will create a route to that point. This route is a list of points the robot must pass to reach the end point. To keep the route as efficient as possible, an algorithm is used that calculates the shortest route, namely Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
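A minimal sketch of Dijkstra's algorithm over such a point map is shown below, using a fragment of the path lengths tabulated further down this page; the data layout and function name are assumptions, not the actual implementation.<br />

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over the point map.

    `graph` maps a point id to a dict {neighbour: distance}. Returns
    (total_distance, route) for the shortest route, or (inf, []) when
    the goal cannot be reached (e.g. after a blocked path was removed).
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, point, route = heapq.heappop(queue)
        if point == goal:
            return dist, route
        if point in visited:
            continue
        visited.add(point)
        for neighbour, length in graph[point].items():
            if neighbour not in visited:
                heapq.heappush(queue, (dist + length, neighbour, route + [neighbour]))
    return float("inf"), []

# A fragment of the path-length tables on this page (distances in metres).
graph = {
    4: {5: 0.86, 6: 1.49},
    5: {4: 0.86, 3: 0.8, 6: 0.7},
    6: {4: 1.49, 5: 0.7, 3: 1.06, 7: 1.7},
    3: {5: 0.8, 6: 1.06},
    7: {6: 1.7},
}
```

From the start point 4, the route to cabinet point 3 via point 5 (0.86 + 0.8 m) beats the route via point 6 (1.49 + 1.06 m), so the algorithm returns the former.<br />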
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin: add GIFs and so on<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given together with the source they come from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x''- and ''y''-directions and rotate it about the ''z''-axis. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°, which is divided into 1000 measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves: to be finished<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*Collin: Dijkstra section to be written<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup and detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation PICO needs to have to face the cabinet. The direction is compared with PICO's actual orientation, and the difference is corrected if PICO is not aligned correctly.<br />
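The placement of a cabinet path point in the middle of its area can be sketched as follows. The corner coordinates in the example are hypothetical, merely chosen so that the result reproduces cabinet 0's point (0.4, 3.2) from the table below; the actual JSON layout of the map file is not reproduced here.<br />

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PathPoint:
    x: float
    y: float
    direction: Optional[float] = None  # facing angle; only set for cabinet points

def cabinet_point(area_corners):
    """Place a cabinet path point exactly in the middle of the virtual
    area specified in front of the cabinet.

    `area_corners` is a list of (x, y) corners of that area, as they
    could be read from the JSON map file (hypothetical format).
    """
    xs = [c[0] for c in area_corners]
    ys = [c[1] for c in area_corners]
    return PathPoint((min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0)
```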
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is in the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used approach is the split-and-merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the line through these end points (a*x + b*y + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the point with the largest distance against the distance limit value<br />
##* If this distance falls below the limit value, there are no further segments (parts) within the global segment.<br />
##* If it falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
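Steps 3.1 to 3.4 above can be sketched as a recursive split function; this is an illustration under assumed names and a hypothetical distance threshold, not the project code.<br />

```python
import math

def split(points, threshold=0.05):
    """Recursive split step of the split-and-merge algorithm.

    `points` is an ordered list of (x, y) measurements belonging to one
    global segment; returned are the indices where the segment is split,
    including both end points.
    """
    first, last = 0, len(points) - 1
    x1, y1 = points[first]
    x2, y2 = points[last]
    # Line through the end points in the form a*x + b*y + c = 0.
    a, b = y2 - y1, x1 - x2
    c = x2 * y1 - x1 * y2
    norm = math.hypot(a, b)
    # Find the point with the largest perpendicular distance to the line.
    best_i, best_d = None, 0.0
    for i in range(first + 1, last):
        d = abs(a * points[i][0] + b * points[i][1] + c) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_i is None or best_d < threshold:
        return [first, last]  # no further segments inside this one
    # Split at the farthest point and recurse on both halves.
    left = split(points[: best_i + 1], threshold)
    right = split(points[best_i:], threshold)
    return left + [i + best_i for i in right[1:]]
```

For an L-shaped set of points the function returns the two end points plus the corner index, after which each sub-segment can be fitted separately.<br />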
<br />
Below is a visual representation of the split principle. The original image is taken from the 2017 EMC course, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*Mike<br />
<br />
'''To be extended with Mike's description of his RANSAC function!'''<br />
<br />
A final line correction is needed because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and intersecting them with each other. The final end points are determined by projecting the found vertices perpendicularly onto these lines.<br />
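The line correction can be illustrated with two small helpers: intersecting two fitted lines to snap a corner, and projecting a found vertex perpendicularly onto a line to obtain an end point. Both lines are represented as (a, b, c) with a*x + b*y + c = 0; the function names are assumptions.<br />

```python
def intersect(l1, l2):
    """Intersection of two lines (a, b, c); used to snap a corner
    to the two fitted wall lines."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # (nearly) parallel walls have no corner
    # Cramer's rule on a1*x + b1*y = -c1 and a2*x + b2*y = -c2.
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)

def project(point, line):
    """Perpendicular projection of a vertex onto a fitted line (a, b, c)."""
    a, b, c = line
    x, y = point
    d = (a * x + b * y + c) / (a * a + b * b)
    return (x - a * d, y - b * d)
```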
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" form the system architecture. This diagram will be used as basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Kevin's section on spatial recognition and related topics will be added here.<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement, thus preventing slip. Reducing slip in the motion of PICO increases the accuracy of its movement on top of the fluency. The S-curve is implemented in two functions: 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
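A simple way to generate such an S-curve set point is a cubic 'smoothstep' velocity ramp, sketched below. This illustrates the idea (velocity and acceleration are continuous, with the acceleration starting and ending at zero and the jerk bounded) and is not necessarily the exact profile used on PICO.<br />

```python
def s_curve_velocity(t, ramp_time, v_start, v_end):
    """Jerk-limited velocity set point for a ramp of duration `ramp_time`.

    A cubic 'smoothstep' profile serves as a simple S-curve: the
    acceleration is zero at both ends of the ramp, so the velocity
    transitions smoothly instead of with a sudden step.
    """
    if t <= 0.0:
        return v_start
    if t >= ramp_time:
        return v_end
    s = t / ramp_time
    return v_start + (v_end - v_start) * (3.0 * s * s - 2.0 * s ** 3)
```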
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far away enough from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
<br />
Although the potential field prevents collision with obstacles, it also pushes PICO off course. To make sure that PICO still reaches its goal, an orientation correction was implemented. This function uses the odometry data to calculate PICO's orientation relative to its current goal. If this orientation differs from the desired orientation, namely that PICO looks directly at its goal, the difference in angle is corrected. Although the odometry data is not very accurate, it is sufficient for this purpose, since PICO navigates from point to point and resets its odometry data when it reaches the next point.<br />
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10cm to 10m, the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This angle is measured in 1000 parts, per an amount of time that can be determined by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot drive backward slowly while facing a wall. The program should stop the robot as soon as it does not register the wall anymore. The same can be done while driving forward to determine the minimum range. To determine the angle the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time over which the robot smoothly reaches its maximum velocity. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77387Embedded Motion Control 2019 Group 32019-06-16T09:44:23Z<p>S136625: /* Drivecontrol */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
This wiki page describes the approach and process of group 3 concerning the Escape room challenge and the Hospital challenge with the PICO robot. The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Hospital challenge involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
The wiki is subdivided in the following parts: Firstly, the approach for the Escape room challenge is explained and evaluated. The second topic is the approach and evaluation of the Hospital challenge. This is followed by a full description of the system architecture used to perform the Hospital challenge. After the system architecture, the most important tests and test results are explained. Lastly, a conclusion and recommendation is provided.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves: add a GIF<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
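The chart above can be condensed into a small state machine sketch. This is an illustrative abstraction: the state names and the boolean sensor checks (''wallAhead'', ''gapRight'', ''wallRight'') are our own, not identifiers from the actual code.<br />

```cpp
// States of the wall follower, mirroring the chart above
// (state names and inputs are our own abstraction).
enum class WallFollowState {
    FindWall,        // drive forward until a wall is detected
    AlignWall,       // line up with the wall on the right
    FollowWall,      // stay between min and max distance to the wall
    TurnCorner,      // rotate 90 degrees counterclockwise
    FollowCorridor,  // follow the right wall of the exit corridor
    Finished         // finish line crossed
};

// One tick of the state machine; the booleans abstract the
// rangefinder checks described in the text.
WallFollowState step(WallFollowState s,
                     bool wallAhead, bool gapRight, bool wallRight) {
    switch (s) {
        case WallFollowState::FindWall:
            return wallAhead ? WallFollowState::AlignWall : s;
        case WallFollowState::AlignWall:
            return wallRight ? WallFollowState::FollowWall : s;
        case WallFollowState::FollowWall:
            if (wallAhead) return WallFollowState::TurnCorner;
            if (gapRight)  return WallFollowState::FollowCorridor;
            return s;
        case WallFollowState::TurnCorner:     // after the 90 degree turn
            return WallFollowState::FollowWall;
        case WallFollowState::FollowCorridor: // second gap: exit crossed
            return gapRight ? WallFollowState::Finished : s;
        default:
            return s;
    }
}
```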
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that the modifications were worth the slowdown, as they proved the robustness of our simple approach.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at different locations on the map: at cabinets, on junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This helps determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighboring, the software will create a route to that point. This route is a list of points to which the robot needs to drive to get to the end point. To make sure the route is as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
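A minimal sketch of Dijkstra's algorithm on the point map, assuming the map is stored as an adjacency list of (neighbour, path length) pairs. The ''Graph'' type and the function name are our own; this is an illustration, not the project's implementation.<br />

```cpp
#include <vector>
#include <queue>
#include <utility>
#include <limits>
#include <functional>

// Edge list per point: (neighbour index, path length in metres).
using Graph = std::vector<std::vector<std::pair<int, double>>>;

// Dijkstra's algorithm: shortest route from `start` to `goal`,
// returned as the list of points the robot should visit in order.
std::vector<int> shortestRoute(const Graph& g, int start, int goal) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(g.size(), inf);
    std::vector<int> prev(g.size(), -1);
    using Item = std::pair<double, int>;  // (distance so far, point)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;        // stale queue entry, skip
        for (auto [v, w] : g[u]) {
            if (dist[u] + w < dist[v]) {  // found a shorter route to v
                dist[v] = dist[u] + w;
                prev[v] = u;
                pq.push({dist[v], v});
            }
        }
    }
    std::vector<int> route;               // walk predecessors back to start
    for (int v = goal; v != -1; v = prev[v]) route.insert(route.begin(), v);
    return route;                         // contains only `goal` if unreachable
}
```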
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin: add GIFs and so on<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. This field of view is divided into 1000 measurement angles. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level, there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the GitLab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves: to be finished<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*Collin: Dijkstra section to be written<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation PICO needs to have to face the cabinet. The direction is subtracted from PICO's actual orientation, and the difference is corrected afterwards if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
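The automatic placement of a cabinet path point in the middle of the area in front of the cabinet can be sketched as follows. The struct layout, function name, and the example area coordinates are our own illustration, not the actual implementation.<br />

```cpp
#include <cmath>

// Hypothetical path point layout: x/y on the JSON map, plus the
// orientation PICO must take when the point is in front of a cabinet.
struct PathPoint {
    double x, y;        // position on the map [m]
    double direction;   // required orientation at a cabinet [rad]
};

// Place a cabinet path point exactly in the middle of the rectangular
// area (x0, y0)-(x1, y1) specified in front of the cabinet.
PathPoint cabinetPoint(double x0, double y0, double x1, double y1,
                       double facing) {
    return { 0.5 * (x0 + x1), 0.5 * (y0 + y1), facing };
}
```

For example, a hypothetical front area with corners (0.0, 3.0) and (0.8, 3.4) would yield the point (0.4, 3.2), matching cabinet 0 in the table above.<br />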
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line between these end points (''ax + by + c = 0'')<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the largest of these distances with the distance limit value<br />
##* If this value falls below the limit value, the segment is not split further.<br />
##* If the value falls above the limit value, the segment is split at this point, and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
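Steps 3.1 to 3.4 of the split algorithm can be sketched as below. This is a simplified illustration; the type and function names are our own, not the project's code.<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Perpendicular distance from p to the line through a and b:
// the d = |a*x + b*y + c| / sqrt(a^2 + b^2) step from the list above.
double lineDistance(const Point& p, const Point& a, const Point& b) {
    double A = b.y - a.y;
    double B = a.x - b.x;
    double C = -(A * a.x + B * a.y);
    return std::fabs(A * p.x + B * p.y + C) / std::sqrt(A * A + B * B);
}

// Recursive split: collect the indices at which the segment
// pts[first..last] must be split so that every point lies within
// `limit` of the line between its sub-segment's end points.
void split(const std::vector<Point>& pts, std::size_t first, std::size_t last,
           double limit, std::vector<std::size_t>& breaks) {
    if (last <= first + 1) return;        // nothing between the end points
    double dMax = 0.0;
    std::size_t iMax = first;
    for (std::size_t i = first + 1; i < last; ++i) {
        double d = lineDistance(pts[i], pts[first], pts[last]);
        if (d > dMax) { dMax = d; iMax = i; }
    }
    if (dMax > limit) {                   // split at the farthest point
        split(pts, first, iMax, limit, breaks);
        breaks.push_back(iMax);
        split(pts, iMax, last, limit, breaks);
    }
}
```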
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*mike<br />
<br />
'''To be extended with an explanation by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and intersecting these lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto the fitted lines.<br />
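The step of equating two fitted wall lines amounts to computing their intersection; a sketch, assuming our own ''Vec2'' type rather than the project's:<br />

```cpp
#include <cmath>
#include <stdexcept>

struct Vec2 { double x, y; };

// Intersection of the line through p1,p2 with the line through p3,p4,
// used to snap two fitted wall segments onto their shared corner.
Vec2 lineIntersection(Vec2 p1, Vec2 p2, Vec2 p3, Vec2 p4) {
    double d = (p1.x - p2.x) * (p3.y - p4.y) - (p1.y - p2.y) * (p3.x - p4.x);
    if (std::fabs(d) < 1e-12)
        throw std::runtime_error("lines are parallel");  // no corner
    double a = p1.x * p2.y - p1.y * p2.x;                // cross products
    double b = p3.x * p4.y - p3.y * p4.x;
    return { (a * (p3.x - p4.x) - (p1.x - p2.x) * b) / d,
             (a * (p3.y - p4.y) - (p1.y - p2.y) * b) / d };
}
```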
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit condition, in which case the state machine advances to the next state.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. These exclude functions that always run in a separate thread, such as tracking the position of the robot on the map. The state chart is designed such that all the requirements of the final challenge are fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Kevin's section on spatial recognition will be added here.<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement and thus prevent slip. The reduction of slip in the motion of PICO increases the accuracy of its movement on top of the fluency. The S-curve is implemented in two different functions: the function 'Drive' accelerates and decelerates smoothly to a certain translational or rotational velocity in any direction, while the second function, 'Drive distance', accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
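A minimal S-curve sketch, here using a smoothstep profile as a stand-in for the full jerk-limited profile from the linked notes. The function name and the profile choice are our own simplification, not the 'Drive' implementation itself.<br />

```cpp
#include <algorithm>
#include <cmath>

// Jerk-limited "S-curve" speed setpoint: ramp from v0 to v1 over T
// seconds using a smoothstep profile, so that the acceleration starts
// and ends at zero and the velocity change is fluent.
double sCurveVelocity(double v0, double v1, double T, double t) {
    double u = std::clamp(t / T, 0.0, 1.0);  // normalised time in [0, 1]
    double s = u * u * (3.0 - 2.0 * u);      // smoothstep: s'(0) = s'(1) = 0
    return v0 + (v1 - v0) * s;
}
```

Calling this every control tick with the elapsed time since the velocity change yields a smooth ramp toward, for example, the 0.5 m/s maximum translational velocity.<br />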
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of an example potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
However, the implementation used for PICO does not use an attraction field, only repulsion from obstacles. The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero; once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
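The repulsion-only potential field described above can be sketched as follows. The weighting function, names, and parameters are our own illustration, not the tuned values used on PICO.<br />

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Repulsion-only potential field: every LRF point inside the avoidance
// radius pushes PICO away, with a weight that grows as the point gets
// closer. The summed vector is added to the commanded velocity.
Vec2 repulsion(const std::vector<Vec2>& points, double radius, double gain) {
    Vec2 sum{0.0, 0.0};
    for (const Vec2& p : points) {                   // points in robot frame
        double d = std::hypot(p.x, p.y);
        if (d < 1e-6 || d > radius) continue;        // outside avoidance region
        double w = gain * (1.0 / d - 1.0 / radius);  // stronger when closer
        sum.x -= w * p.x / d;                        // unit vector away from p
        sum.y -= w * p.y / d;
    }
    return sum;
}
```

In a symmetric corridor the left and right contributions cancel, so the sum vector is zero and the trajectory is unchanged, exactly as in the second image above.<br />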
<br />
</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77386Embedded Motion Control 2019 Group 32019-06-16T09:42:09Z<p>S136625: /* Control block */</p>
<hr />
<div>
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves: add a clip (GIF) of the escape room challenge<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at various locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be reached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is at a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to approximately where B is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For each path between points, it can be defined whether the path runs through a doorway or hallway, or through a room. This can help in deciding how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not a neighbour, the software creates a route to that point. This route is a list of points the robot needs to visit to reach the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
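As an illustration, the route search can be sketched as follows. This is a minimal Python sketch of Dijkstra's algorithm over a point graph, not the project's actual implementation; the point names and distances below are made up.<br />

```python
import heapq

def dijkstra(graph, start, goal):
    """Return the shortest route from start to goal as a list of points.

    graph maps each point to a dict of {neighbour: straight-line distance}.
    """
    # Priority queue of (cost so far, point, route taken to reach it)
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, point, route = heapq.heappop(queue)
        if point == goal:
            return route
        if point in visited:
            continue
        visited.add(point)
        for neighbour, dist in graph[point].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + dist, neighbour, route + [neighbour]))
    return None  # goal unreachable

# Toy graph: the direct edge A-C is longer than the detour via B
graph = {
    "A": {"B": 1.0, "C": 3.0},
    "B": {"A": 1.0, "C": 1.0},
    "C": {"A": 3.0, "B": 1.0},
}
```

The returned route is the list of path points the robot visits in order; the robot then drives from point to point along it.<br />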
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin: add GIFs and further reflection<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful information.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. If one of these routes is blocked, the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement points. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way the robot interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which fetches and merges the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves: to be finished<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
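This conditioning step can be sketched as follows (an illustrative Python sketch, not the actual perception object; the range limits are taken from the laser rangefinder specifications elsewhere on this page):<br />

```python
def filter_scan(ranges, range_min=0.1, range_max=10.0):
    """Drop invalid laser rangefinder points.

    Points at or below the sensor's minimum range -- e.g. dust on the
    sensor erroneously measured at the origin -- and points beyond the
    maximum range are removed. The index is kept alongside each range
    so the beam angle of every surviving point stays recoverable.
    """
    return [(i, r) for i, r in enumerate(ranges) if range_min < r < range_max]
```

Only the surviving (index, range) pairs are passed on to the feature detection algorithm.<br />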
<br />
=== Detection ===<br />
<br />
*Collin: Dijkstra section to be written<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet, and specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, after which PICO is corrected if it is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
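The path lengths in the tables above follow directly from the point coordinates. A small Python sketch (with coordinates copied from the tables) that reproduces a few of the listed lengths:<br />

```python
import math

# (x, y) coordinates of a few path points, taken from the tables above
points = {
    4: (5.0, 2.5),   # start point
    5: (5.5, 3.2),
    6: (5.5, 3.9),
    3: (6.3, 3.2),   # cabinet 3
}

def path_length(a, b):
    """Euclidean distance between two path points, rounded as in the tables."""
    ax, ay = points[a]
    bx, by = points[b]
    return round(math.hypot(bx - ax, by - ay), 2)
```

For example, path 4->5 spans 0.5 m in x and 0.7 m in y, giving the tabulated length of 0.86 m.<br />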
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process this data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points, written as a*x + b*y + c = 0<br />
## For each data point between these end points, determine the perpendicular distance to this line: d = abs(a*x + b*y + c)/sqrt(a^2 + b^2)<br />
## Compare the largest of these distances with the distance limit value<br />
##* If this value falls below the limit value, there are no further segments (parts) within the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point, and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
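The recursive split step (3.1–3.4) can be sketched as follows; this is an illustrative Python version, with the distance limit as a hypothetical parameter:<br />

```python
def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x0, y0), (x1, y1), (px, py) = a, b, p
    # Line through a and b in the form A*x + B*y + C = 0
    A, B = y1 - y0, x0 - x1
    C = -(A * x0 + B * y0)
    return abs(A * px + B * py + C) / (A ** 2 + B ** 2) ** 0.5

def split(points, threshold):
    """Recursively split a segment at its farthest point (steps 3.1-3.4)."""
    if len(points) < 3:
        return [points[0], points[-1]]
    a, b = points[0], points[-1]                       # step 3.1: end points
    distances = [point_line_distance(p, a, b) for p in points[1:-1]]
    i = max(range(len(distances)), key=distances.__getitem__) + 1
    if distances[i - 1] < threshold:                   # step 3.4: below limit
        return [a, b]                                  # no corner in segment
    left = split(points[:i + 1], threshold)            # repeat left of split
    right = split(points[i:], threshold)               # repeat right of split
    return left[:-1] + right    # merge, without duplicating the split point
```

Feeding in an L-shaped scan returns the two end points plus the corner where the walls meet.<br />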
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted such that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating the lines to each other: each corner is the intersection of two fitted lines. The final end points are determined by projecting the found vertices perpendicularly onto the fitted line.<br />
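Equating two fitted lines to find their shared corner can be sketched as follows (Python; lines are represented as (a, b, c) with a*x + b*y + c = 0, matching the distance formula used in the split step, and the example coefficients are hypothetical):<br />

```python
def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y + c = 0."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # lines are (nearly) parallel: no single corner
    # Cramer's rule on a1*x + b1*y = -c1 and a2*x + b2*y = -c2
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)
```

For instance, the horizontal wall y = 0 and the vertical wall x = 1 intersect in the corner (1, 0).<br />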
<br />
=== Monitor block ===<br />
<br />
*Collin: to be extended<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions; if so, the state machine transitions to the next state.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
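A minimal sketch of such a tick loop (Python; the state names, exit conditions and world-model fields here are hypothetical, not the actual states from the chart above):<br />

```python
# Minimal state machine tick: on every tick the current state's exit
# condition is checked against the world model, and on fulfilment the
# machine transitions and reports the change.
transitions = {
    # state: (exit condition on the world model, next state)
    "drive_to_point": (lambda wm: wm["at_point"], "approach_cabinet"),
    "approach_cabinet": (lambda wm: wm["at_cabinet"], "idle"),
}

def tick(state, world_model):
    """Advance the state machine by one tick; report any transition."""
    if state not in transitions:
        return state  # terminal state: nothing left to check
    condition, next_state = transitions[state]
    if condition(world_model):
        print(f"Transition: {state} -> {next_state} (exit condition met)")
        return next_state
    return state
```

Printing every transition also satisfies the requirement that the software communicates when, why and to what state it changes.<br />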
<br />
=== World model block ===<br />
*Kevin/Mike: section about spatial recognition to be added here<br />
<br />
=== Control block ===<br />
The control block contains the actuator control, called Drivecontrol. This block provides output to the actuators based on inputs from the Worldmodel. <br />
<br />
==== Drivecontrol ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is smooth. This is achieved by implementing an S-curve for any velocity change. The S-curve was chosen to limit jerk in the robot's movement and thus prevent slip. Reducing slip increases the accuracy of PICO's movement on top of its smoothness. The S-curve is implemented in two functions: 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction, while 'Drive distance' accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful information.<br />
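An S-curve velocity setpoint can be sketched as follows (Python; this uses one common closed form, the smoothstep polynomial, which is not necessarily the exact set of equations used in 'Drive'):<br />

```python
def s_curve_velocity(t, t_total, v_max):
    """Smooth (S-curve) velocity setpoint from 0 to v_max over t_total seconds.

    Uses the smoothstep polynomial 3u^2 - 2u^3, whose derivative (the
    acceleration) is zero at both ends, so the speed ramps up and levels
    off without acceleration jumps.
    """
    u = min(max(t / t_total, 0.0), 1.0)  # clamp progress to [0, 1]
    return v_max * (3 * u ** 2 - 2 * u ** 3)
```

Halfway through the ramp the setpoint is exactly half of v_max, and both ends of the curve are flat, which is what gives the velocity profile its S shape.<br />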
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
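The summation described above can be sketched as follows (Python; the avoidance radius and gain are hypothetical tuning parameters, and positions are expressed relative to the robot):<br />

```python
import math

def potential_field(obstacles, goal, repulse_radius=0.6, gain=1.0):
    """Sum an attractive unit vector to the goal with repulsion from obstacles.

    obstacles and goal are (x, y) positions relative to the robot.
    Obstacles outside repulse_radius contribute nothing, matching the
    avoidance region described above.
    """
    gx, gy = goal
    norm = math.hypot(gx, gy) or 1.0
    fx, fy = gx / norm, gy / norm          # attraction: unit vector to goal
    for ox, oy in obstacles:
        d = math.hypot(ox, oy)
        if 0 < d < repulse_radius:
            # push away from the obstacle, stronger when closer
            w = gain * (repulse_radius - d) / d
            fx -= w * ox / d
            fy -= w * oy / d
    return fx, fy
```

In a symmetric corridor the two repulsion vectors cancel and only the attraction remains, while a single nearby obstacle on one side bends the sum vector away from it, just as in the figure.<br />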
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle is +114.6 to -114.6 degrees, measured from the front of the robot. This field of view is divided into 1000 measurement points, sampled at an interval that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First, the output values from the range finder can be saved to a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall, which yields the maximum range. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
This wiki page describes the approach and process of group 3 concerning the Escape room challenge and the Hospital challenge with the PICO robot. The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Hospital challenge involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
The wiki is subdivided in the following parts: Firstly, the approach for the Escape room challenge is explained and evaluated. The second topic is the approach and evaluation of the Hospital challenge. This is followed by a full description of the system architecture used to perform the Hospital challenge. After the system architecture, the most important tests and test results are explained. Lastly, a conclusion and recommendation is provided.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves doet een gifje<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that the modifications were worth the slowdown, as they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are that point's neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to approximately where B is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighboring, the software will create a route to that point. This route is a list of points to which the robot needs to drive in sequence to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
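As a sketch of how such a shortest route can be computed, the snippet below runs Dijkstra's algorithm over a small point map stored as an adjacency list. The node numbering, edge lengths and function name here are illustrative, not the real hospital map:<br />

```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's algorithm over a point map stored as an adjacency list:
// adj[u] holds (neighbour, edge length) pairs. Returns the route from
// start to goal as a list of node indices, including both end points.
std::vector<int> shortestRoute(
    const std::vector<std::vector<std::pair<int, double>>>& adj,
    int start, int goal) {
  const double inf = std::numeric_limits<double>::infinity();
  std::vector<double> dist(adj.size(), inf);
  std::vector<int> prev(adj.size(), -1);
  using Entry = std::pair<double, int>;  // (distance so far, node)
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
  dist[start] = 0.0;
  pq.push({0.0, start});
  while (!pq.empty()) {
    auto [d, u] = pq.top();
    pq.pop();
    if (d > dist[u]) continue;  // stale queue entry, already improved
    for (auto [v, w] : adj[u]) {
      if (dist[u] + w < dist[v]) {
        dist[v] = dist[u] + w;
        prev[v] = u;
        pq.push({dist[v], v});
      }
    }
  }
  // Walk the predecessor chain back from the goal to recover the route.
  std::vector<int> route;
  for (int v = goal; v != -1; v = prev[v]) route.insert(route.begin(), v);
  return route;
}
```

For the real map, the adjacency list would be filled from the path lengths listed in the Path planning section.<br />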
<br />
== Reflection ==<br />
TBD<br />
<br />
* Kevin: add GIFs and the like.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate it about the ''z'' axis. The maximum speed of the robot is limited to ''±0.5 m/s'' in translation and ''±1.2 rad/s'' in rotation. These values are from the Embedded Motion Control wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 equally spaced measurements. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
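To illustrate how these numbers translate to beam bearings, the sketch below converts a beam index to its angle, assuming 1000 equally spaced beams over the 229.2° field of view centred on the robot's forward axis; the indexing convention (index 0 on the robot's right) is an assumption and should be checked against the framework:<br />

```cpp
#include <cassert>
#include <cmath>

// Illustrative beam-index-to-angle conversion for the rangefinder, using
// the simulator values quoted above. Index 0 is assumed to be the
// rightmost beam; the last index the leftmost.
constexpr double kPi = 3.14159265358979323846;
constexpr int kNumBeams = 1000;
constexpr double kFovRad = 229.2 * kPi / 180.0;

double beamAngle(int index) {
  const double increment = kFovRad / (kNumBeams - 1);
  return -kFovRad / 2.0 + index * increment;  // negative = to the robot's right
}
```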
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
* Yves: finish this section.<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
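A minimal sketch of this conditioning step is shown below; the range limits are the simulator values, and the struct and function names are illustrative:<br />

```cpp
#include <cassert>
#include <vector>

// Conditioning sketch: drop LRF points that fall outside the sensor's
// valid range, including the spurious zero-range points measured at the
// sensor origin. The real limits should come from the config.
struct ScanPoint { double range; double angle; };

std::vector<ScanPoint> filterScan(const std::vector<ScanPoint>& raw,
                                  double minRange = 0.1,
                                  double maxRange = 10.0) {
  std::vector<ScanPoint> valid;
  for (const ScanPoint& p : raw) {
    if (p.range >= minRange && p.range <= maxRange) valid.push_back(p);
  }
  return valid;
}
```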
<br />
=== Detection ===<br />
<br />
* Collin: add the description of the detection algorithms here.<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified in the direction variable. The direction is subtracted from the real orientation of PICO, and the result is used to correct PICO if it is not aligned correctly.<br />
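The orientation correction can be sketched as follows, where the difference between the path point's direction variable and PICO's measured heading is wrapped to (−π, π] so PICO always turns the short way round; the function name and signature are illustrative:<br />

```cpp
#include <cassert>
#include <cmath>

// Heading error between the cabinet path point's direction and PICO's
// current orientation, wrapped to (-pi, pi]. Feeding this error to the
// rotation controller aligns PICO with the cabinet front.
double headingError(double targetDirection, double robotTheta) {
  const double kPi = 3.14159265358979323846;
  double e = targetDirection - robotTheta;
  while (e > kPi) e -= 2.0 * kPi;
  while (e <= -kPi) e += 2.0 * kPi;
  return e;
}
```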
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we perform the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the line through these end points (''ax + by + c = 0'')<br />
## For each data point between these end points, determine the perpendicular distance to the line (''d = |ax + by + c| / sqrt(a^2 + b^2)'')<br />
## Compare the point with the largest distance against the distance limit value<br />
##* If this value falls below the limit value, there are no further sub-segments in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
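Steps 3.1 to 3.4 can be sketched as the following recursion, which collects the indices at which a segment is split; the threshold value and names are illustrative:<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Split step sketch: recursively find the point farthest from the chord
// between a segment's end points and split there when the perpendicular
// distance exceeds the limit. Break indices mark the detected corners.
struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b
// (a and b are assumed distinct).
double lineDist(const Pt& a, const Pt& b, const Pt& p) {
  double A = b.y - a.y, B = a.x - b.x;
  double C = -(A * a.x + B * a.y);
  return std::fabs(A * p.x + B * p.y + C) / std::sqrt(A * A + B * B);
}

void split(const std::vector<Pt>& pts, int first, int last, double limit,
           std::vector<int>& breaks) {
  double dmax = 0.0;
  int imax = -1;
  for (int i = first + 1; i < last; ++i) {
    double d = lineDist(pts[first], pts[last], pts[i]);
    if (d > dmax) { dmax = d; imax = i; }
  }
  if (imax >= 0 && dmax > limit) {
    split(pts, first, imax, limit, breaks);
    breaks.push_back(imax);  // a corner lies at this index
    split(pts, imax, last, limit, breaks);
  }
}
```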
<br />
Below is a visual representation of the split principle. The original image is taken from the wiki of the 2017 EMC course, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with Mike's description of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines through the points and then equating these lines to each other. The final end points are determined by projecting the found vertices perpendicularly onto these lines.<br />
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions. (TODO: to be completed.)<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
* Kevin/Mike: Kevin's description of spatial recognition goes here.<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is smooth. This is achieved by implementing an S-curve for any velocity change. The S-curve implementation was chosen to limit jerk in the robot's movement and thus prevent slip. The reduction of slip in the motion of PICO increases the accuracy of its movement on top of the smoothness. The S-curve is implemented in two different functions: the function 'Drive' accelerates and decelerates smoothly to a certain speed or rotation in any direction. The second function, 'Drive distance', accurately accelerates and decelerates over a fixed distance or rotation. General information on S-curves can be found via the link under Useful Information.<br />
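A minimal sketch of such a jerk-limited velocity ramp is given below, here using a smoothstep polynomial so the acceleration is zero at both ends of the ramp; the smoothstep choice and the rampTime parameter are illustrative, not necessarily the profile used in 'Drive':<br />

```cpp
#include <cassert>
#include <cmath>

// S-curve velocity sketch: the commanded velocity moves from v0 to v1
// over rampTime seconds along a smoothstep polynomial, so acceleration
// starts and ends at zero and jerk stays bounded.
double sCurveVelocity(double v0, double v1, double t, double rampTime) {
  if (t <= 0.0) return v0;
  if (t >= rampTime) return v1;
  double s = t / rampTime;                  // normalised time in [0, 1]
  double smooth = s * s * (3.0 - 2.0 * s);  // 3s^2 - 2s^3, zero slope at both ends
  return v0 + (v1 - v0) * smooth;
}
```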
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
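The repulsive part of the potential field can be sketched as below: every obstacle point within an avoidance radius pushes the robot away, with a weight that grows as the obstacle gets closer. The gain, radius, and names are illustrative choices, not the tuned values used on PICO:<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Repulsion sketch: sum a push-away vector over all obstacle points
// (e.g. conditioned LRF points in world coordinates) that fall within
// the avoidance radius around the robot.
struct Vec2 { double x, y; };

Vec2 repulsion(const std::vector<Vec2>& obstacles, const Vec2& robot,
               double avoidRadius, double gain) {
  Vec2 sum{0.0, 0.0};
  for (const Vec2& o : obstacles) {
    double dx = robot.x - o.x, dy = robot.y - o.y;
    double d = std::sqrt(dx * dx + dy * dy);
    if (d < avoidRadius && d > 1e-6) {
      // Weight grows as the obstacle approaches, zero at the radius edge.
      double w = gain * (1.0 / d - 1.0 / avoidRadius) / (d * d);
      sum.x += w * dx;  // push away from the obstacle
      sum.y += w * dy;
    }
  }
  return sum;
}
```

In a symmetric corridor the left and right contributions cancel, reproducing the behaviour of the second image above.<br />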
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurements, taken at a time interval that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77379Embedded Motion Control 2019 Group 32019-06-16T09:32:13Z<p>S136625: /* Introduction */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
This wiki page describes the approach and process of group 3 concerning the Escape room challenge and the Hospital challenge with the PICO robot. The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Hospital challenge involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
The wiki is subdivided in the following parts: Firstly, the approach for the Escape room challenge is explained and evaluated. The second topic is the approach and evaluation of the Hospital challenge. This is followed by a full description of the system architecture used to perform the Hospital challenge. After the system architecture, the most important tests and test results are explained. Lastly, a conclusion and recommendation is provided.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
* Yves: add a GIF of the challenge here.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that the modifications were worth the slowdown, as they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are that point's neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to approximately where B is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighboring, the software will create a route to that point. This route is a list of points to which the robot needs to drive in sequence to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
TBD<br />
<br />
* Kevin: add GIFs and the like.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications will be given, along with the source each specification comes from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 equally spaced measurements. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the GitLab repository of the project group. This involves using the ''git pull'' command, which fetches the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves afmaken<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
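A minimal sketch of this conditioning step is shown below; the function name is our own illustration, and the 0.1 m and 10.0 m limits are taken from the rangefinder specifications above:<br />

```python
# Sketch of the perception step: discard LRF returns outside the valid
# measurement range (e.g. 0.0 m readings caused by dust on the sensor).
RANGE_MIN = 0.1   # metres, minimum range of the sensor
RANGE_MAX = 10.0  # metres, maximum range of the sensor

def condition_scan(ranges):
    """Replace invalid range readings with None, so that later stages
    (feature detection) can skip them instead of being polluted."""
    return [r if RANGE_MIN <= r <= RANGE_MAX else None for r in ranges]

print(condition_scan([0.0, 0.5, 12.3, 3.2]))  # [None, 0.5, None, 3.2]
```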
<br />
=== Detection ===<br />
<br />
*Collin: Dijkstra's algorithm<br />
<br />
==== Path planning ====<br />
The path points are determined both automatically and by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have when standing in front of the cabinet. The direction is subtracted from the actual orientation of PICO, and PICO's alignment is corrected afterwards if it is not aligned correctly.<br />
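The orientation correction described above can be sketched as follows; the function name and the wrap-around via atan2 are our own illustration of how the subtraction can be kept in (−π, π] so PICO always turns the short way:<br />

```python
import math

def heading_error(pico_angle, cabinet_direction):
    """Difference between PICO's actual orientation and the direction
    variable of a cabinet path point, wrapped to (-pi, pi].
    (Illustrative helper, not the actual project code.)"""
    error = pico_angle - cabinet_direction
    # atan2(sin, cos) wraps any angle back into (-pi, pi].
    return math.atan2(math.sin(error), math.cos(error))

print(round(heading_error(3.0, -3.0), 3))  # -0.283 (not +6.0)
```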
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process this data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the linear line between these end points (by = ax + c)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If this value falls below the limit value, there are no further segments (parts) within the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
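The split step (3.1 to 3.4) can be sketched recursively as follows; the threshold value, helper names and example wall are our own illustration, not the project code:<br />

```python
import math

def split(points, threshold=0.1):
    """Recursive split step over one global segment.
    points: list of (x, y) tuples; returns the indices where the
    segment should be split (a sketch of steps 3.1-3.4)."""
    if len(points) < 3:
        return []
    (x1, y1), (x2, y2) = points[0], points[-1]
    # Line through the end points in the form a*x + b*y + c = 0.
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = math.hypot(a, b)
    # Perpendicular distance of every point to that line.
    dists = [abs(a * x + b * y + c) / norm for x, y in points]
    i = max(range(len(points)), key=lambda k: dists[k])
    if dists[i] <= threshold:
        return []          # no further segments inside this one
    # Split at the farthest point and recurse on both halves.
    left = split(points[:i + 1], threshold)
    right = [i + j for j in split(points[i:], threshold)]
    return left + [i] + right

# An L-shaped wall: two straight parts meeting at (1, 0).
pts = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0), (1.5, 0.5), (2.0, 1.0)]
print(split(pts))  # [2] -> split at the corner point (1, 0)
```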
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*Mike<br />
<br />
'''To be extended with Mike's description of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating adjacent lines to each other, which yields the corner points. The final end points are determined by projecting the outermost vertices perpendicularly onto the fitted line.<br />
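Equating two fitted lines to find a corner point can be sketched as follows; this is an illustrative helper, assuming each line is stored in the form ''ax + by + c = 0'' used in the split step above:<br />

```python
def intersect(l1, l2):
    """Corner point of two fitted wall lines, each given as (a, b, c)
    for a*x + b*y + c = 0 (illustrative, not the project code)."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # lines are (nearly) parallel: no corner
    # Cramer's rule on a1*x + b1*y = -c1 and a2*x + b2*y = -c2.
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)

# A horizontal wall y = 1 and a vertical wall x = 2 meet at (2, 1).
print(intersect((0.0, 1.0, -1.0), (1.0, 0.0, -2.0)))  # (2.0, 1.0)
```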
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, and if so, the state machine transitions to the corresponding next state.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the world model block from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
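The tick-based monitor described above can be sketched as follows; the state names, the world-model dictionary and the class interface are placeholders for illustration, not the actual states from the chart:<br />

```python
class Monitor:
    """Minimal tick-based state machine: each state has an exit
    condition and a successor state (a sketch, not the project code)."""

    def __init__(self, states, initial):
        self.states = states  # name -> (exit_condition, next_state)
        self.current = initial

    def tick(self, world):
        """Run once per control loop: check the current state's exit
        condition and transition when it is fulfilled."""
        exit_condition, next_state = self.states[self.current]
        if exit_condition(world):
            # Communicate the state change, as the requirements demand.
            print(f"{self.current} -> {next_state}")
            self.current = next_state
        return self.current

# Two placeholder states with placeholder exit conditions.
states = {
    "drive_to_point": (lambda w: w["at_point"], "approach_cabinet"),
    "approach_cabinet": (lambda w: w["at_cabinet"], "drive_to_point"),
}
fsm = Monitor(states, "drive_to_point")
print(fsm.tick({"at_point": True, "at_cabinet": False}))  # approach_cabinet
```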
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Kevin's section on spatial feature recognition will go here.<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is smooth. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
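A velocity setpoint following an S-curve can be sketched with a cubic smoothstep; this particular profile and its names are our own illustration, not necessarily the equations used in the project:<br />

```python
def s_curve(t, t_total, v_target):
    """Velocity setpoint for a smooth ramp from 0 to v_target over
    t_total seconds, using the cubic smoothstep 3s^2 - 2s^3 as the
    S-curve shape (zero velocity slope at both ends of the ramp)."""
    s = min(max(t / t_total, 0.0), 1.0)  # normalised time, clamped to [0, 1]
    return v_target * (3 * s ** 2 - 2 * s ** 3)

# Ramp to the 0.5 m/s translation limit over one second.
print([round(s_curve(t / 4, 1.0, 0.5), 3) for t in range(5)])
# [0.0, 0.078, 0.25, 0.422, 0.5]
```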
<br />
Drive has been further incorporated in a function that uses a potential field. This function smoothly prevents the robot from bumping into objects. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Every wall and object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
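The repulsive part of the potential field can be sketched directly from the rangefinder data; the influence radius and the 1/r-style weighting are our own illustrative choices, not necessarily the ones used in the project:<br />

```python
import math

def repulsion_vector(ranges, angle_min, angle_inc, influence=0.6):
    """Sum of repulsive component vectors from one LRF scan: every beam
    closer than `influence` metres pushes the robot away from the
    obstacle, with a weight that grows as the obstacle gets closer."""
    fx = fy = 0.0
    for i, r in enumerate(ranges):
        if r is None or r >= influence:
            continue  # beam outside the avoidance region: no contribution
        angle = angle_min + i * angle_inc
        weight = 1.0 / r - 1.0 / influence
        # Push opposite to the beam direction (away from the obstacle).
        fx -= weight * math.cos(angle)
        fy -= weight * math.sin(angle)
    return fx, fy

# Obstacle close on the left (+y side): the sum vector points right (-y).
fx, fy = repulsion_vector([2.0, 0.3, 2.0, 2.0],
                          angle_min=0.0, angle_inc=math.pi / 2)
print(round(fy, 2))  # -1.67
```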
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m and the angle is +114.6 to -114.6 degrees, measured from the front of the robot. This field of view is divided into 1000 measurements, taken at a time interval that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved to a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall, which gives the maximum range. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the shortest time in which the maximum velocity can be reached while the motion remains smooth. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves will add a GIF<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit and, once identified, drive towards the exit and on to the finish line. In case the robot could not identify the exit, it would fall back to following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component will be given, with a source of where each specification comes from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. This field of view is divided into 1000 measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing whether the WiFi interface can be utilised to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
''To do (Yves): complete this section.''<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
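As an illustration, the invalid-point filter described above could be sketched as follows. This is a minimal sketch, not the actual implementation: the function name is hypothetical, and the range limits are taken from the rangefinder specifications elsewhere on this page.

```python
def filter_scan(ranges, r_min=0.1, r_max=10.0):
    """Mark invalid rangefinder readings as None.

    Readings at (or very near) the origin of the sensor, below the
    minimum range or beyond the maximum range, are treated as invalid,
    so they cannot pollute the feature detection downstream. Keeping a
    None placeholder preserves the angle index of every beam.
    """
    return [r if r_min <= r <= r_max else None for r in ranges]
```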
<br />
=== Detection ===<br />
<br />
''To do (Collin): describe the Dijkstra-based route planning.''<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet: it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, after which PICO is corrected if it is not aligned properly.<br />
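The alignment correction in front of a cabinet amounts to computing a wrapped angle difference. A minimal sketch, assuming angles in radians (the function name is hypothetical and not part of the actual code):

```python
import math

def heading_error(target_direction, robot_orientation):
    """Angle PICO still has to rotate to face the cabinet front.

    The raw difference is wrapped to (-pi, pi] via atan2, so the
    correction always takes the shortest rotation direction.
    """
    error = target_direction - robot_orientation
    return math.atan2(math.sin(error), math.cos(error))
```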
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
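The shortest route over these path points can be found with Dijkstra's algorithm, as mentioned in the Hospital Competition approach. A minimal sketch, using a small subset of the path lengths tabulated above and assuming the graph is undirected:

```python
import heapq

def dijkstra(edges, start, goal):
    """Shortest route between two path points over an undirected graph."""
    graph = {}
    for a, b, length in edges:
        graph.setdefault(a, []).append((b, length))
        graph.setdefault(b, []).append((a, length))
    # Priority queue of (distance so far, current point, route taken)
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, point, route = heapq.heappop(queue)
        if point == goal:
            return dist, route
        if point in visited:
            continue
        visited.add(point)
        for neighbour, length in graph.get(point, []):
            if neighbour not in visited:
                heapq.heappush(queue, (dist + length, neighbour, route + [neighbour]))
    return float('inf'), []

# A few edges from the path length tables: (point, point, length)
edges = [(4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06)]
dist, route = dijkstra(edges, 4, 3)  # from the start point to cabinet 3
```

With this subset, the route from the start point (4) to cabinet 3 goes via point 5, since 0.86 + 0.8 is shorter than the direct detour via point 6.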
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process this data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the line through these end points (a*x + b*y + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x + b*y + c) / sqrt(a^2 + b^2))<br />
## Compare the point with the largest distance to the distance limit value<br />
##* If this value falls below the limit value, there are no further segments (parts) within the global segment.<br />
##* If this value exceeds the limit value, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
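The split step (3.1 to 3.4) can be sketched as follows. This is a minimal sketch, not the project code; the distance threshold is a hypothetical tuning parameter.

```python
import math

def split(points, threshold=0.05):
    """Recursively split a segment at the point furthest from the line
    through its end points; returns the breakpoint indices."""
    def recurse(lo, hi, out):
        (x1, y1), (x2, y2) = points[lo], points[hi]
        # Line a*x + b*y + c = 0 through the two end points
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        norm = math.hypot(a, b)
        # Find the intermediate point with the largest perpendicular distance
        best_i, best_d = None, 0.0
        for i in range(lo + 1, hi):
            d = abs(a * points[i][0] + b * points[i][1] + c) / norm
            if d > best_d:
                best_i, best_d = i, d
        if best_i is not None and best_d > threshold:
            recurse(lo, best_i, out)   # left part of the split point
            out.append(best_i)
            recurse(best_i, hi, out)   # right part of the split point
    out = [0]
    recurse(0, len(points) - 1, out)
    out.append(len(points) - 1)
    return out
```

For an L-shaped wall scan, the recursion splits exactly at the corner and then finds no further split points on either straight part.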
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of the RANSAC function (Mike).'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points that lie somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then intersecting these lines with each other. The final end points are determined as the points on each line onto which the found vertices project perpendicularly.<br />
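Intersecting two fitted lines to recover a corner can be sketched as follows, with each line given by two points on it. This is a minimal sketch under those assumptions, not the project's implementation:

```python
def intersect(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through
    p3, p4, solved with Cramer's rule; returns None for parallel lines."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        return None  # parallel lines: no unique corner point
    det1 = x1 * y2 - y1 * x2  # determinant of the first line's points
    det2 = x3 * y4 - y3 * x4  # determinant of the second line's points
    x = (det1 * (x3 - x4) - (x1 - x2) * det2) / denom
    y = (det1 * (y3 - y4) - (y1 - y2) * det2) / denom
    return (x, y)
```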
<br />
=== Monitor block ===<br />
<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit condition; if so, the corresponding state transition is performed.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
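The tick-based checking of exit conditions can be sketched as follows. This is a minimal sketch: the state names, exit conditions, and transition reasons below are hypothetical, not the ones from the actual state chart.

```python
class StateMachine:
    """On every tick, check the current state's exit condition and,
    if fulfilled, announce and perform the transition."""

    def __init__(self, transitions, initial):
        # transitions: state -> (exit_condition(world), next_state, reason)
        self.transitions = transitions
        self.state = initial

    def tick(self, world):
        condition, next_state, reason = self.transitions[self.state]
        if condition(world):
            # Communicate the state change and the reason for it
            print(f"{self.state} -> {next_state}: {reason}")
            self.state = next_state

# Hypothetical fragment: drive to a cabinet, then align with its front
transitions = {
    "DriveToCabinet": (lambda w: w["at_cabinet"], "PositionAtCabinet",
                       "cabinet path point reached"),
    "PositionAtCabinet": (lambda w: w["aligned"], "DriveToCabinet",
                          "aligned with cabinet front"),
}
fsm = StateMachine(transitions, "DriveToCabinet")
fsm.tick({"at_cabinet": False, "aligned": False})  # exit condition not met
fsm.tick({"at_cabinet": True, "aligned": False})   # transition is performed
```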
<br />
=== World model block ===<br />
''To do (Kevin/Mike): describe the spatial recognition in the world model.''<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
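One way to generate such a velocity change is a third-order polynomial ramp, which starts and ends with zero acceleration. This is a minimal sketch under that assumption; the actual equations from the linked S-curve document may differ.

```python
def s_curve_velocity(v0, v1, T, t):
    """Velocity setpoint at time t for a smooth ramp from v0 to v1
    over duration T, using the smoothstep polynomial 3s^2 - 2s^3.

    The derivative of the polynomial is 6s(1 - s)/T, so the
    acceleration is zero at both the start and the end of the ramp.
    """
    s = min(max(t / T, 0.0), 1.0)  # normalised time, clamped to [0, 1]
    return v0 + (v1 - v0) * (3 * s**2 - 2 * s**3)
```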
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
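The repulsive part of the behaviour described above can be sketched by summing unit vectors pointing away from every rangefinder point within the avoidance region. This is a minimal sketch; the avoidance radius and the proximity weighting are hypothetical tuning choices, not the project's actual values.

```python
import math

def repulsion_vector(scan, avoid_radius=0.5):
    """Sum of repulsive components from rangefinder points.

    scan: list of (angle, distance) pairs in the robot frame, with
    positive angles to the left. Points outside the avoidance radius
    contribute nothing, so the vector is zero in open space and the
    components cancel in a symmetric corridor.
    """
    fx = fy = 0.0
    for angle, dist in scan:
        if 0.0 < dist < avoid_radius:
            weight = 1.0 / dist - 1.0 / avoid_radius  # stronger when closer
            fx -= weight * math.cos(angle)  # push away from the obstacle
            fy -= weight * math.sin(angle)
    return fx, fy
```

For example, a wall close on the left side yields a sum vector with a negative y component, i.e. pointing to the right, matching the third image.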
<br />
= Testing =<br />
<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This angular range is divided into 1000 measurement points, sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the PICO robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it does not register the wall anymore. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
''To do (Mike/Kevin): write the conclusion and recommendations.''<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77377Embedded Motion Control 2019 Group 32019-06-16T09:31:02Z<p>S136625: /* Introduction */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
This wiki page describes the approach and process of group 3 concerning the Escape room challenge and the Hospital challenge with the PICO robot.<br />
<br />
The wiki is subdivided in the following parts: Firstly, the approach for the Escape room challenge is explained and evaluated. The second topic is the approach and evaluation of the Hospital challenge. This is followed by a full description of the system architecture used to perform the Hospital challenge. After the system architecture, the most important tests and test results are explained. Lastly, a conclusion and recommendation is provided.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
''To do (Yves): add a clip of the challenge.''<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map from the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached in a straight line from at least one other point. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and its surrounding spatial features. When the robot is at a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path is through a doorway or hallway, or whether it is through a room. This can help in determining how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
TBD<br />
<br />
''To do (Kevin): add the reflection and clips.''<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition show a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that components will be given, with a source of where this specification comes from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which fetches and merges the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*To be completed by Yves.<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
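<br />
The filtering described above can be sketched as follows. This is an illustrative sketch, not the actual implementation: the range limits come from the specifications section, while the data layout (a flat list of measured distances) is an assumption.<br />
<br />

```python
# Hypothetical sketch of the perception step: filtering invalid LRF points.
# The range limits (0.1 m to 10.0 m) come from the specifications section;
# the data layout (a flat list of ranges) is an assumption for illustration.

RANGE_MIN = 0.1   # m, minimum valid LRF distance
RANGE_MAX = 10.0  # m, maximum valid LRF distance

def filter_scan(ranges):
    """Return (index, range) pairs for valid beams only.

    Beams that report (close to) zero distance -- e.g. caused by dust on
    the sensor -- or values outside the sensor range are discarded, so
    they cannot pollute the feature detection downstream.
    """
    return [(i, r) for i, r in enumerate(ranges)
            if RANGE_MIN <= r <= RANGE_MAX]
```
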
<br />
=== Detection ===<br />
<br />
*To be completed by Collin: Dijkstra's algorithm.<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet: it specifies the orientation that PICO needs to have when standing in front of the cabinet. The direction is subtracted from the actual orientation of PICO, and the orientation is corrected afterwards if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
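<br />
The automatic placement of the cabinet points can be sketched as follows. The JSON fragment and field names used here are hypothetical (the schema of the provided map may differ); the point is only that each cabinet path point is the centre of the virtual area in front of the cabinet, matching cabinet 0 at (0.4, 3.2) in the table below.<br />
<br />

```python
import json

# Hypothetical map fragment: the field names ("cabinets", "front_area")
# are illustrative and may not match the real map schema.
MAP_JSON = """
{
  "cabinets": [
    {"id": 0, "front_area": [[0.2, 3.0], [0.6, 3.0], [0.6, 3.4], [0.2, 3.4]]}
  ]
}
"""

def cabinet_points(map_json):
    """Place each cabinet path point in the middle of its front area."""
    data = json.loads(map_json)
    points = {}
    for cab in data["cabinets"]:
        xs = [p[0] for p in cab["front_area"]]
        ys = [p[1] for p in cab["front_area"]]
        points[cab["id"]] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return points
```
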
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
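The tables above define a weighted graph over the path points, on which Dijkstra's algorithm (named in the approach section) can compute the shortest route. Below is a minimal sketch; the edge list is transcribed from the two path-length tables, while the function name and data layout are illustrative rather than the actual implementation.<br />
<br />

```python
import heapq

# Undirected edges (point, point, length) transcribed from the
# "Path lengths" tables above.
EDGES = [
    (4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06),
    (6, 7, 1.7), (7, 8, 2.0), (8, 9, 1.5), (9, 2, 1.6), (9, 10, 1.84),
    (9, 11, 1.17), (2, 10, 0.9), (10, 11, 0.85), (11, 12, 1.2),
    (12, 13, 1.17), (12, 14, 0.8), (13, 0, 0.5), (13, 14, 0.85),
    (14, 15, 1.2), (15, 1, 1.1), (15, 16, 0.7), (15, 17, 0.76),
    (1, 16, 0.85), (16, 17, 1.1), (17, 18, 1.5), (18, 19, 2.0), (19, 8, 2.0),
]

def shortest_route(start, goal):
    """Dijkstra's algorithm over the path-point graph.

    Returns (total length, list of path points to visit in order).
    """
    graph = {}
    for a, b, w in EDGES:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    queue = [(0.0, start, [start])]   # (distance so far, node, route)
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route        # first pop of goal is the shortest
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph[node]:
            if nxt not in visited:
                heapq.heappush(queue, (dist + w, nxt, route + [nxt]))
    return float("inf"), []
```

For example, from the start point 4 to cabinet 0 this yields the route 4, 6, 7, 8, 9, 11, 12, 13, 0 with a total length of about 10.73.<br />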
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used approach is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filter the measurement data<br />
# Recognise and split global segments (recognising multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the straight line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the point with the largest distance against the distance threshold<br />
##* If this value falls below the threshold, there are no further sub-segments in the global segment.<br />
##* If the value exceeds the threshold, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
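<br />
The recursive split step can be sketched as follows. This is a minimal illustration of the procedure, assuming the scan points have already been grouped into one global segment; names and the threshold value are illustrative.<br />
<br />

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    # Express the line through a and b as la*x + lb*y + lc = 0.
    la = b[1] - a[1]
    lb = a[0] - b[0]
    lc = b[0] * a[1] - a[0] * b[1]
    return abs(la * p[0] + lb * p[1] + lc) / math.hypot(la, lb)

def split(points, threshold, lo=0, hi=None):
    """Return the indices of the corner points found (including both ends)."""
    if hi is None:
        hi = len(points) - 1
    # Find the point between the end points that lies farthest from the
    # straight line connecting them.
    d_max, i_max = 0.0, lo
    for i in range(lo + 1, hi):
        d = point_line_distance(points[i], points[lo], points[hi])
        if d > d_max:
            d_max, i_max = d, i
    if d_max <= threshold:
        return [lo, hi]           # no further sub-segments
    # Split at the farthest point and recurse on both halves.
    left = split(points, threshold, lo, i_max)
    right = split(points, threshold, i_max, hi)
    return left[:-1] + right      # avoid duplicating the split index
```

For a scan of a wall corner, the split index lands on the corner point, after which both halves are accepted as straight sub-segments.<br />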
<br />
Below is a visual representation of the split principle. The original image comes from the EMC course of 2017, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*mike<br />
<br />
'''To be extended with a description by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating these lines to each other: the intersection of two adjacent lines gives the corner point. The final end points are determined by projecting the found vertices perpendicularly onto the fitted line.<br />
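<br />
The two geometric operations involved — intersecting two fitted lines to obtain a corner, and perpendicularly projecting a vertex onto a line — can be sketched as follows (an illustrative sketch, with lines in the ax + by + c = 0 form used above):<br />
<br />

```python
def intersect(l1, l2):
    """Intersection of two lines, each given as (a, b, c) with ax + by + c = 0."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel lines: no corner point
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)

def project(p, line):
    """Perpendicular projection of point p onto the line (a, b, c)."""
    a, b, c = line
    # Signed distance factor along the line normal (a, b).
    t = (a * p[0] + b * p[1] + c) / (a * a + b * b)
    return (p[0] - a * t, p[1] - b * t)
```
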
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the state has fulfilled its exit conditions, in which case a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
To be added: Kevin's description of spatial recognition.<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
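<br />
An S-curve velocity ramp of the kind used by 'Drive' can be sketched as follows. A cubic smoothstep is used here purely as an illustration; the actual profile follows the S-curve equations linked under Useful Information.<br />
<br />

```python
def s_curve_velocity(v_start, v_end, t, t_total):
    """Velocity setpoint at time t when ramping from v_start to v_end.

    A cubic smoothstep is used here as an illustrative S-curve: it starts
    and ends with zero acceleration, which makes the motion fluent.
    """
    if t <= 0.0:
        return v_start
    if t >= t_total:
        return v_end
    s = t / t_total                      # normalised time in [0, 1]
    blend = 3.0 * s * s - 2.0 * s ** 3   # smoothstep: 0 -> 1, zero slope at ends
    return v_start + (v_end - v_start) * blend
```

For example, ramping from standstill to the 0.5 m/s translational maximum passes smoothly through half speed at half the ramp time.<br />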
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
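<br />
The four situations above can be sketched with a simple potential-field sum. The gains and avoidance radius below are illustrative values, not the ones tuned for the robot:<br />
<br />

```python
import math

def potential_field(goal, obstacles, r_avoid=1.0, k_rep=1.0):
    """Sum of an attractive vector towards the goal and repulsive vectors
    away from every obstacle point within the avoidance radius.

    All positions are (x, y) tuples in the robot frame, with the robot at
    the origin. The gains are illustrative, not tuned values.
    """
    # Attraction: unit vector towards the goal.
    gd = math.hypot(goal[0], goal[1])
    fx, fy = (goal[0] / gd, goal[1] / gd) if gd > 0 else (0.0, 0.0)
    # Repulsion: only obstacles inside the avoidance region contribute,
    # growing stronger as the distance shrinks.
    for ox, oy in obstacles:
        d = math.hypot(ox, oy)
        if 0.0 < d < r_avoid:
            mag = k_rep * (1.0 / d - 1.0 / r_avoid)
            fx -= mag * ox / d
            fy -= mag * oy / d
    return fx, fy
```

In a symmetric corridor the lateral repulsions cancel and the robot keeps driving straight, matching the second situation in the figure; an obstacle on one side pushes the sum vector to the other side.<br />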
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled in 1000 parts, at a time interval that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77376Embedded Motion Control 2019 Group 32019-06-16T09:25:33Z<p>S136625: /* Introduction */</p>
<hr />
<div>
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To navigate safely, PICO must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information; a commonly used approach is the split-and-merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (a*x + b*y + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Find the point with the largest distance and compare this distance with the threshold value<br />
##* If the largest distance falls below the threshold, there are no further sub-segments (parts) in the global segment.<br />
##* If it falls above the threshold, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
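Steps 3.1 to 3.4 can be sketched as a recursive function. This is a simplified illustration of the split step only, not the project code; the filtering, merge and RANSAC steps are omitted.<br />

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b,
    using d = |A*x + B*y + C| / sqrt(A^2 + B^2) for the line
    (y2-y1)*x - (x2-x1)*y + (x2*y1 - y2*x1) = 0."""
    (x1, y1), (x2, y2) = a, b
    A, B = y2 - y1, -(x2 - x1)
    C = x2 * y1 - y2 * x1
    return abs(A * p[0] + B * p[1] + C) / math.hypot(A, B)

def split(points, threshold):
    """Recursive split step: return the indices that divide the point
    list into (approximately) straight parts."""
    if len(points) < 3:
        return [0, len(points) - 1]
    a, b = points[0], points[-1]                  # step 3.1: segment end points
    dists = [point_line_distance(p, a, b) for p in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i_max - 1] < threshold:
        return [0, len(points) - 1]               # no further sub-segments
    left = split(points[:i_max + 1], threshold)   # recurse on both halves
    right = split(points[i_max:], threshold)
    return left + [i + i_max for i in right[1:]]
```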
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of the RANSAC function (Mike).'''<br />
<br />
A final line correction needs to be done because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and intersecting neighbouring lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto the fitted line.<br />
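The two geometric operations used in this correction, intersecting two fitted lines at a corner and projecting a vertex perpendicularly onto a line, could be sketched as follows. These are illustrative helpers (not the project code); a line is represented as (a, b, c) with a*x + b*y + c = 0, matching the distance formula above.<br />

```python
def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y + c = 0;
    used to snap a corner to the crossing of two fitted wall lines."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # (nearly) parallel walls: no corner
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)

def project(p, line):
    """Perpendicular projection of a vertex onto a fitted line, used to
    correct a free end point of a wall segment."""
    a, b, c = line
    t = (a * p[0] + b * p[1] + c) / (a * a + b * b)
    return (p[0] - a * t, p[1] - b * t)
```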
<br />
=== Monitor block ===<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions; if so, the corresponding state transition is executed.<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*To be completed: spatial recognition and the world model (Kevin/Mike).<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
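One common S-curve shape is the cubic smoothstep, which gives zero acceleration at both ends of a velocity change. The sketch below is an illustration of this idea only; the profile actually used follows the equations linked under Useful Information.<br />

```python
def s_curve_velocity(v0, v1, t, t_total):
    """Velocity at time t of a smooth transition from v0 to v1 over
    t_total seconds, using the cubic smoothstep 3s^2 - 2s^3 so that
    the acceleration is zero at the start and end of the ramp."""
    s = min(max(t / t_total, 0.0), 1.0)  # normalised, clamped time
    blend = 3.0 * s * s - 2.0 * s ** 3   # smoothstep blend factor in [0, 1]
    return v0 + (v1 - v0) * blend
```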
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated into a function that uses a potential field. This function smoothly steers the robot away from obstacles so that it does not bump into them. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Every wall and object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero; once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
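The summation of the repulsive component vectors can be sketched as follows. This is a simplified illustration with hypothetical tuning values (avoidance radius and gain), not the project implementation; the attraction towards the goal is omitted.<br />

```python
import math

def repulsion_vector(scan, angle_min, angle_inc, r_avoid=0.6, gain=1.0):
    """Sum of repulsive component vectors from LRF points inside the
    avoidance radius; each point pushes the robot away with a strength
    that grows as the obstacle gets closer. r_avoid and gain are
    hypothetical tuning values."""
    fx, fy = 0.0, 0.0
    for i, r in enumerate(scan):
        if r is None or r <= 0.0 or r >= r_avoid:
            continue  # point outside the avoidance region
        angle = angle_min + i * angle_inc
        w = gain * (1.0 / r - 1.0 / r_avoid)  # stronger when closer
        fx -= w * math.cos(angle)             # push away from the obstacle
        fy -= w * math.sin(angle)
    return fx, fy
```

In a symmetric corridor (equal walls left and right) the components cancel and the sum vector is zero, matching the second situation in the figure above.<br />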
<br />
= Testing =<br />
<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m and its field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. The field of view is sampled in 1000 equally spaced measurements, at an update rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First, the output values from the range finder can be saved to a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time it takes to reach the maximum velocity in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77063Embedded Motion Control 2019 Group 32019-06-12T15:23:23Z<p>S136625: /* Conclusion & Recommendations */</p>
<hr />
<div><br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*To be added: a GIF of the challenge (Yves).<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that the modifications were worth the slowdown and demonstrated the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be reached from a neighbouring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is at a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighbouring, the software creates a route to that point. This route is a list of points the robot needs to visit to get to the end point. To make sure the route is as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
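A minimal sketch of Dijkstra's algorithm over the path-point graph could look as follows. This is illustrative code, not the project implementation; the graph maps each point to its neighbouring points with the corresponding path lengths.<br />

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest route from start to goal through the path-point graph.
    `graph` maps a point to a dict of {neighbour: path length}."""
    queue = [(0.0, start, [start])]  # (cost so far, current point, route)
    visited = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in visited:
            continue
        visited.add(node)
        for neighbour, length in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + length, neighbour, route + [neighbour]))
    return float('inf'), []  # goal unreachable
```

For example, with hypothetical points 4, 5, 6 and cabinet 3 connected by a few edges, the algorithm picks the cheapest chain of neighbouring points rather than the one with the fewest hops.<br />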
<br />
== Reflection ==<br />
TBD<br />
<br />
*To be completed: reflection and GIFs (Kevin).<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given together with the source of each value.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement angles. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves afmaken<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*Collin dijkstra shit<br />
<br />
==== Path planning ====<br />
The method of determine the path points is done automatic and by hand. The program will load the Json map file when the program starts. The code will detect where all the cabinets are and what the front is of a cabinet. Each cabinet path point will exactly be placed in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified within the direction variable. The direction will be subtracted to the real orientation of PICO and afterward be corrected if PICO is not aligned right.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, he must know where he is in the world map and what is around him. PICO is equipped with a LIDAR scanner that scans the environment with the help of laser beams. This data is then processed to be able to determine where all walls and objects are. There are many ways in which you can process the data into useful information. A commonly used algorithm is the split and merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we do the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the linear line between these end points (by = ax + c)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If our value falls below the limit value then there are no more segments (parts) in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*mike<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and endpoints align with the real wall lines. This is done by determining the lines through the points and intersecting them with each other; the final endpoints are determined by projecting the outermost found vertices perpendicularly onto the fitted line.<br />
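The correction described above can be sketched with two small helpers: intersecting two fitted wall lines to obtain a corner, and projecting a vertex perpendicularly onto a line to obtain an end point. This is an illustrative sketch under the assumption that lines are stored in the general form ax + by + c = 0; the actual implementation may differ.<br />

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// A wall line in general form: a*x + b*y + c = 0.
struct Line { double a, b, c; };

// Corner point: intersection of two non-parallel wall lines.
Vec2 intersect(const Line& l1, const Line& l2) {
    const double det = l1.a * l2.b - l2.a * l1.b;
    return { (l1.b * l2.c - l2.b * l1.c) / det,
             (l2.a * l1.c - l1.a * l2.c) / det };
}

// End point: perpendicular projection of a vertex onto the line.
Vec2 project(const Vec2& p, const Line& l) {
    const double t = (l.a * p.x + l.b * p.y + l.c) / (l.a * l.a + l.b * l.b);
    return { p.x - t * l.a, p.y - t * l.b };
}
```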
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. These exclude functions that always run in a separate thread, such as tracking the position of the robot on the map. The state chart is designed such that all the requirements of the final challenge are fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
To be added: Kevin's description of spatial recognition.<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for every velocity change. General information on S-curves can be found via the link under Useful information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
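As an illustration, an S-curve speed setpoint could look like the sketch below. The cubic blend used here is an assumption for illustration; the actual profile follows the S-curve equations linked under Useful information.<br />

```cpp
#include <cassert>
#include <cmath>

// S-curve speed setpoint: ramp smoothly from v0 to v1 over duration T.
// The cubic blend 3s^2 - 2s^3 starts and ends with zero acceleration,
// which makes the velocity change fluent.
double sCurveVelocity(double v0, double v1, double T, double t) {
    if (t <= 0.0) return v0;
    if (t >= T)   return v1;
    const double s = t / T;
    return v0 + (v1 - v0) * (3.0 * s * s - 2.0 * s * s * s);
}
```

Halfway through the ramp the setpoint is exactly the average of the two speeds, and the acceleration is at its maximum.<br />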
<br />
Drive has been further incorporated into a function that uses a potential field. This function fluently prevents the robot from bumping into objects. See the figure below for a visual representation of the implementation of a potential field: the leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles, and the rightmost image shows the combination of the two. Every wall and object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
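The behaviour in the four situations above can be sketched as the sum of an attractive vector towards the goal and repulsive contributions from all obstacle points inside the avoidance region. The gain and distance weighting below are illustrative assumptions, not the tuned values used on the robot.<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Potential field sum vector: attraction towards the goal plus a
// repulsion away from every obstacle point inside the avoidance
// radius. Obstacle points are in the robot frame (robot at origin).
Vec2 potentialFieldVector(const Vec2& goal, const std::vector<Vec2>& obstacles,
                          double avoidRadius, double repulsionGain) {
    Vec2 sum = goal;  // attraction component
    for (const Vec2& o : obstacles) {
        const double d = std::hypot(o.x, o.y);
        if (d > 1e-6 && d < avoidRadius) {
            // Repulsion grows as the obstacle gets closer.
            const double w = repulsionGain * (avoidRadius - d) / d;
            sum.x -= w * o.x / d;
            sum.y -= w * o.y / d;
        }
    }
    return sum;
}
```

In a symmetric corridor the left and right repulsions cancel, so only the attraction remains, exactly as in the second situation described above.<br />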
<br />
= Testing =<br />
<br />
*Job<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, sampled at a rate that can be configured by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved to a file and compared with actual measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
*Mike/Kevin<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div><br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves: to add a gif clip<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications. We felt, however, that these modifications were worth the slowdown, as they proved the robustness of our software's simple approach.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at various locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In placing these points, it is important that each point can be approached in a straight line from another point. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This can help in deciding how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software creates a route to that point. This route is a list of points to which the robot needs to drive in order to reach the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
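A minimal sketch of Dijkstra's algorithm on the point map is given below, with path points as nodes and the straight-line connections as weighted edges. The adjacency-list format is an assumption for illustration, not the project's actual data structure.<br />

```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's algorithm over the point map: nodes are path points and
// edges are the straight-line connections between neighbouring points,
// weighted by their length. Returns the point sequence from start to
// goal, or an empty list if the goal is unreachable.
std::vector<int> shortestRoute(
        const std::vector<std::vector<std::pair<int, double>>>& adj,
        int start, int goal) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), inf);
    std::vector<int> prev(adj.size(), -1);
    using QItem = std::pair<double, int>;  // (distance so far, node)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> q;
    dist[start] = 0.0;
    q.push({0.0, start});
    while (!q.empty()) {
        const auto [d, u] = q.top();
        q.pop();
        if (d > dist[u]) continue;  // stale queue entry
        for (const auto& [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                prev[v] = u;
                q.push({dist[v], v});
            }
        }
    }
    if (dist[goal] == inf) return {};
    std::vector<int> route;
    for (int v = goal; v != -1; v = prev[v]) route.insert(route.begin(), v);
    return route;
}
```

The returned list is exactly the route the robot drives: each entry is the next point to move to.<br />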
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin: to add gif clips and further details<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, originally developed by Gostai, which is now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic: it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle with respect to the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
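Since the framework measures a finite number of equally distributed angles, each beam index maps linearly to an angle. A small sketch, assuming a symmetric field of view (the 229.2° and 1000 beams quoted in the Specifications section are simulator values; the real robot's values should come from the robot interface):<br />

```cpp
#include <cassert>
#include <cmath>

// Angle of a beam relative to the robot's forward direction, for a
// rangefinder with n equally distributed beams across a symmetric
// field of view of fovRad radians.
double beamAngle(int index, int n, double fovRad) {
    return -0.5 * fovRad + index * (fovRad / (n - 1));
}
```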
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in detecting the corridor with the finish line in it, and therefore has priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, together with the source of the specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate it about the ''z'' axis. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values come from the Embedded Motion Control wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
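These limits can be enforced in software by clamping every commanded velocity before it is sent to the base. A simple sketch (clamping each axis independently is a simplification; the struct and function names are illustrative):<br />

```cpp
#include <algorithm>
#include <cassert>

struct BaseCommand { double vx, vy, va; };

// Clamp commanded base velocities to the platform limits quoted above:
// 0.5 m/s translation per axis and 1.2 rad/s rotation.
BaseCommand clampCommand(BaseCommand c) {
    c.vx = std::max(-0.5, std::min(0.5, c.vx));
    c.vy = std::max(-0.5, std::min(0.5, c.vy));
    c.va = std::max(-1.2, std::min(1.2, c.va));
    return c;
}
```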
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that the robot interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the GitLab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as for clarifying its behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. If the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing whether the WiFi interface can be utilised to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves afmaken<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*Collin: to add a description of the Dijkstra implementation<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is compared with PICO's actual orientation, which is then corrected if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
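The path point data described above could be represented as in the sketch below; the struct layout and names are illustrative. The path lengths in the tables that follow are simply the straight-line distances between the point coordinates.<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A path point: a map position, the required approach direction
// (only meaningful for cabinet points) and the indices of the
// neighbouring points reachable in a straight line.
struct PathPoint {
    double x, y;
    double direction;             // orientation at a cabinet [rad]
    std::vector<int> neighbours;  // points reachable in a straight line
};

// Straight-line distance between two path points, as used for the
// entries in the path length tables.
double pathLength(const PathPoint& a, const PathPoint& b) {
    return std::hypot(b.x - a.x, b.y - a.y);
}
```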
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, he must know where he is in the world map and what is around him. PICO is equipped with a LIDAR scanner that scans the environment with the help of laser beams. This data is then processed to be able to determine where all walls and objects are. There are many ways in which you can process the data into useful information. A commonly used algorithm is the split and merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we do the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the linear line between these end points (by = ax + c)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If our value falls below the limit value then there are no more segments (parts) in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*mike<br />
<br />
'''To be extended met een verhaal van Mike over zijn ransac functie!!!!'''<br />
<br />
A final line correction needs to be done because the RANSAC function gives only start and endpoints somewhere between the found vertices. The lines need to be fitted so that the corners and endpoints align to the real wall lines. This is simply done by determining the lines between the points and then equate the lines to each other. The final endpoints are determined by equating the point on the line where the found vertices are perpendicular to this line.<br />
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the state has fulfilled its exit TODO<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" form the system architecture. This diagram will be used as basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Hier komt kevin's shit over spatial recognition enzo... biem<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far away enough from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
<br />
= Testing =<br />
<br />
<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, which are sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the PICO robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico Using PICO]<br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall, which marks the maximum range. The same can be done while driving forward to determine the minimum range. To determine the field of view, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time it takes to reach the maximum velocity of the robot in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves: add a GIF<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
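<br />
The state chart can be condensed into a small transition function. The sketch below is illustrative Python, not the actual challenge code; the boolean sensor flags are assumed to be produced by the rangefinder processing.<br />
<br />
```python
from enum import Enum, auto

class State(Enum):
    FIND_WALL = auto()
    ALIGN = auto()
    FOLLOW_WALL = auto()
    CORNER = auto()
    ENTER_EXIT = auto()
    FINISHED = auto()

def step(state, gaps_seen, *, wall_ahead=False, gap_right=False,
         aligned=False, turned_90=False):
    """One tick of the wall follower. gaps_seen counts gaps passed on
    the right: the first is the exit corridor, the second the finish."""
    if state is State.FIND_WALL and wall_ahead:
        return State.ALIGN, gaps_seen
    if state is State.ALIGN and aligned:
        return State.FOLLOW_WALL, gaps_seen
    if state is State.FOLLOW_WALL:
        if gap_right:
            gaps_seen += 1
            nxt = State.FINISHED if gaps_seen == 2 else State.ENTER_EXIT
            return nxt, gaps_seen
        if wall_ahead:
            return State.CORNER, gaps_seen  # rotate 90 deg counterclockwise
    if state is State.CORNER and turned_90:
        return State.FOLLOW_WALL, gaps_seen
    if state is State.ENTER_EXIT and aligned:
        return State.FOLLOW_WALL, gaps_seen
    return state, gaps_seen
```
<br />
Running the ticks in sequence reproduces the chart: find a wall, align, follow it, turn into the first gap on the right, and finish at the second gap.<br />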
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep from the wall by modifying the config file in the software. Although our program did succeed in the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that the modifications were worth the slowdown, as they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to approximately where B is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between two points, it can be defined whether the path runs through a doorway or hallway, or through a room. This helps in determining how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighboring, the software creates a route to that point: a list of points the robot needs to visit to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route, namely Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin: add GIFs and the like<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement points. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves: to be finished<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
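<br />
As a minimal sketch of this conditioning step (illustrative Python; the range limits are taken from the specifications elsewhere on this page), invalid readings can be masked as follows:<br />
<br />
```python
def condition_scan(ranges, range_min=0.1, range_max=10.0):
    """Replace out-of-range LRF readings with None so that downstream
    feature detection can skip them. Readings at (or very near) the
    sensor origin, e.g. caused by dust, fall below range_min."""
    return [r if range_min <= r <= range_max else None for r in ranges]
```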
<br />
=== Detection ===<br />
<br />
*Collin: Dijkstra route planning description to be added<br />
<br />
==== Path planning ====<br />
The path points are determined both automatically and by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to stand in front of the cabinet. The direction is subtracted from the actual orientation of PICO, and PICO's orientation is afterwards corrected if it is not aligned properly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
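<br />
The automatic placement of the cabinet path points can be sketched as below. This is illustrative Python with hypothetical JSON field names, as the exact map format is not reproduced here; the sample cabinet corresponds to cabinet 0 in the table below.<br />
<br />
```python
import json

# Hypothetical map fragment: the real hospital map format may differ.
map_json = '''
{"cabinets": [
  {"id": 0, "front_area": [[0.2, 3.0], [0.6, 3.0], [0.6, 3.4], [0.2, 3.4]],
   "direction": 3.1416}
]}
'''

def cabinet_path_points(text):
    """Place one path point exactly in the middle of each cabinet's
    front area; keep the required approach direction with the point."""
    data = json.loads(text)
    points = {}
    for cab in data["cabinets"]:
        xs = [p[0] for p in cab["front_area"]]
        ys = [p[1] for p in cab["front_area"]]
        points[cab["id"]] = (sum(xs) / len(xs), sum(ys) / len(ys),
                             cab["direction"])
    return points
```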
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
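As an illustration of the route planning, the sketch below applies Dijkstra's algorithm to the path lengths tabulated above. This is illustrative Python, not the code that ran on PICO.<br />
<br />
```python
import heapq

# Undirected path graph taken from the path-length tables above
# (cabinets 0-3, start point 4, intermediate points 5-19).
EDGES = [
    (4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06),
    (6, 7, 1.7), (7, 8, 2.0), (8, 9, 1.5), (9, 2, 1.6), (9, 10, 1.84),
    (9, 11, 1.17), (2, 10, 0.9), (10, 11, 0.85), (11, 12, 1.2),
    (12, 13, 1.17), (12, 14, 0.8), (13, 0, 0.5), (13, 14, 0.85),
    (14, 15, 1.2), (15, 1, 1.1), (15, 16, 0.7), (15, 17, 0.76),
    (1, 16, 0.85), (16, 17, 1.1), (17, 18, 1.5), (18, 19, 2.0), (19, 8, 2.0),
]

def shortest_route(start, goal, edges=EDGES):
    """Dijkstra's algorithm over the point map: returns (length, route)."""
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (dist + w, nxt, route + [nxt]))
    return float("inf"), []
```
<br />
For example, driving from the start point (4) to cabinet 0 yields the route 4, 6, 7, 8, 9, 11, 12, 13, 0 with a total length of 10.73.<br />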
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used algorithm is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (a*x + b*y + c = 0)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the largest of these distances with the distance limit value<br />
##* If this value falls below the limit value, there are no further sub-segments in the global segment.<br />
##* If it exceeds the limit value, the segment is split at this point, and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
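<br />
The recursive split step above can be sketched as follows. This is an illustrative Python version under the line representation a*x + b*y + c = 0; the threshold value is an assumption.<br />
<br />
```python
import math

def split(points, threshold=0.05):
    """Recursively split a segment of (x, y) points at the point that
    lies furthest from the line through the segment's end points.
    Returns the indices of the break points (corner candidates)."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    # Line through the end points in the form a*x + b*y + c = 0.
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = math.hypot(a, b) or 1.0
    # Find the point with the largest perpendicular distance to the line.
    i_max, d_max = 0, 0.0
    for i, (x, y) in enumerate(points):
        d = abs(a * x + b * y + c) / norm
        if d > d_max:
            i_max, d_max = i, d
    if d_max <= threshold:
        return []  # no corner inside this segment
    left = split(points[:i_max + 1], threshold)
    right = [i_max + j for j in split(points[i_max:], threshold)]
    return left + [i_max] + right
```
<br />
For points sampled along two perpendicular walls, the function returns the index of the corner point between them.<br />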
<br />
Below is a visual representation of the split principle. The original image is taken from the wiki page of EMC 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then intersecting these lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto these fitted lines.<br />
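<br />
The intersection and projection steps can be written compactly when each fitted wall line is stored in the form a*x + b*y + c = 0, matching the line representation used in the split step above. This is an illustrative Python sketch:<br />
<br />
```python
def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y + c = 0,
    used to snap two fitted wall lines to a shared corner point."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # (nearly) parallel walls: no corner
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return x, y

def project(p, line):
    """Perpendicular projection of point p onto the line (a, b, c),
    used to move a segment end point onto the fitted wall line."""
    a, b, c = line
    x, y = p
    d = (a * x + b * y + c) / (a * a + b * b)
    return x - a * d, y - b * d
```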
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled one of its exit conditions, in which case a transition to the corresponding next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" form the system architecture. This diagram will be used as basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Kevin's section on spatial recognition will be added here.<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
*Job<br />
<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
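<br />
A simple way to realise such a fluent velocity change is a sinusoidal S-curve ramp, sketched below. This is illustrative Python; the linked notes derive a polynomial variant, and the function name and ramp time are assumptions.<br />
<br />
```python
import math

def s_curve_velocity(t, v0, v1, T):
    """Smooth (sinusoidal) S-curve interpolation from velocity v0 to v1
    over a ramp time T: the acceleration is zero at both ends of the
    ramp, so the robot starts and stops accelerating fluently."""
    if t <= 0.0:
        return v0
    if t >= T:
        return v1
    s = 0.5 - 0.5 * math.cos(math.pi * t / T)  # rises smoothly from 0 to 1
    return v0 + (v1 - v0) * s
```
<br />
'Drive' would evaluate such a profile every control tick; 'Drive distance' can additionally integrate the profile to stop after a given distance.<br />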
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf this paper]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, which are sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the PICO robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico Using PICO]<br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall, which marks the maximum range. The same can be done while driving forward to determine the minimum range. To determine the field of view, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time it takes to reach the maximum velocity of the robot in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves will add a GIF<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time, and because more resources were being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and once identified, drive towards the exit and on to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown and that they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be reached from a neighboring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is at a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighboring, the software creates a route to that point. This route is a list of points the robot needs to visit in order to reach the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
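As an illustration of the route computation described above, the sketch below runs Dijkstra's algorithm over the point map, with points as node indices and path lengths as edge weights. This is a minimal example written for this wiki, not the actual project code; all names are chosen for illustration.<br />

```cpp
#include <algorithm>
#include <functional>
#include <limits>
#include <queue>
#include <vector>

// Illustrative edge of the point map: a neighboring point index and the
// straight-line path length to it (in metres).
struct Edge { int to; double length; };

// Dijkstra's algorithm: returns the shortest route from `start` to `goal`
// as a list of point indices, or an empty list if no route exists.
std::vector<int> shortestRoute(const std::vector<std::vector<Edge>>& graph,
                               int start, int goal) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(graph.size(), inf);
    std::vector<int> prev(graph.size(), -1);
    using Item = std::pair<double, int>;  // (tentative distance, point index)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;  // stale queue entry, already improved
        if (u == goal) break;       // goal settled: shortest route found
        for (const Edge& e : graph[u]) {
            if (dist[u] + e.length < dist[e.to]) {
                dist[e.to] = dist[u] + e.length;
                prev[e.to] = u;
                pq.push({dist[e.to], e.to});
            }
        }
    }
    if (dist[goal] == inf) return {};  // goal unreachable
    std::vector<int> route;
    for (int v = goal; v != -1; v = prev[v]) route.push_back(v);
    std::reverse(route.begin(), route.end());
    return route;
}
```

The adjacency list would be filled from the path-length tables under 'Path planning', so the returned route is the list of points the robot visits in order.<br />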
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin + GIFs and so on<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement angles. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves to finish<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
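A minimal sketch of this filtering step is shown below, assuming the LRF data arrives as a plain vector of range values; the 0.1 m and 10 m bounds are the sensor limits listed under Specifications, and the function and variable names are illustrative rather than taken from the project code.<br />

```cpp
#include <vector>

// Drops LRF points that fall outside the sensor's valid measurement
// range, such as spurious points reported at the sensor origin (range 0),
// so they cannot pollute the feature detection algorithm.
std::vector<float> filterRanges(const std::vector<float>& ranges,
                                float rangeMin = 0.1f,
                                float rangeMax = 10.0f) {
    std::vector<float> valid;
    valid.reserve(ranges.size());
    for (float r : ranges) {
        if (r >= rangeMin && r <= rangeMax)
            valid.push_back(r);  // keep only physically plausible readings
    }
    return valid;
}
```

A real implementation might instead flag invalid beams while keeping the angle indexing intact; dropping them is the simplest variant for this sketch.<br />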
<br />
=== Detection ===<br />
<br />
*Collin: Dijkstra section to be added<br />
<br />
==== Path planning ====<br />
The path points are determined both automatically and by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, and the resulting error is corrected afterwards if PICO is not aligned correctly.<br />
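The direction correction at a cabinet point can be sketched as follows: the stored cabinet direction is subtracted from PICO's measured orientation and the error is wrapped into (-&pi;, &pi;], so the shortest rotation can be commanded. Function names and the alignment tolerance are illustrative assumptions, not values from the project code.<br />

```cpp
#include <cmath>

// Wraps the difference between PICO's measured orientation and the stored
// cabinet direction into (-pi, pi], so the shortest rotation can be
// commanded afterwards.
double headingError(double robotAngle, double cabinetDirection) {
    const double kPi = 3.14159265358979323846;
    double err = robotAngle - cabinetDirection;
    while (err > kPi)   err -= 2.0 * kPi;
    while (err <= -kPi) err += 2.0 * kPi;
    return err;
}

// True when PICO is facing the cabinet to within the given tolerance.
bool isAligned(double robotAngle, double cabinetDirection,
               double tolerance = 0.05 /* rad, illustrative value */) {
    return std::fabs(headingError(robotAngle, cabinetDirection)) < tolerance;
}
```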
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information; a commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment:<br />
## Determine the end points of the segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the largest of these distances with the distance limit value<br />
##* If this value falls below the limit value, there are no further sub-segments in the global segment.<br />
##* If this value exceeds the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
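Steps 3.1 to 3.4 above can be sketched as a recursive function: fit a line through the segment end points, find the data point with the largest perpendicular distance, and split there if that distance exceeds the limit value. This is an illustrative implementation, not the actual project code.<br />

```cpp
#include <cmath>
#include <vector>

struct Point { double x; double y; };

// Perpendicular distance from point p to the line through e1 and e2,
// written as a*x + b*y + c = 0 (step 3.3 of the split algorithm).
double perpDistance(const Point& p, const Point& e1, const Point& e2) {
    double a = e2.y - e1.y;
    double b = -(e2.x - e1.x);
    double c = -a * e1.x - b * e1.y;
    return std::fabs(a * p.x + b * p.y + c) / std::sqrt(a * a + b * b);
}

// Recursive split (steps 3.1-3.4): if the farthest point between the
// segment end points pts[first] and pts[last] exceeds the limit value,
// split there and process both halves. Split indices are collected in
// order in `splits`.
void splitSegment(const std::vector<Point>& pts, int first, int last,
                  double limit, std::vector<int>& splits) {
    if (last - first < 2) return;  // nothing between the end points
    int worst = -1;
    double worstDist = 0.0;
    for (int i = first + 1; i < last; ++i) {
        double d = perpDistance(pts[i], pts[first], pts[last]);
        if (d > worstDist) { worstDist = d; worst = i; }
    }
    if (worst >= 0 && worstDist > limit) {
        splitSegment(pts, first, worst, limit, splits);
        splits.push_back(worst);  // corner between two wall segments
        splitSegment(pts, worst, last, limit, splits);
    }
}
```

Run on an L-shaped set of scan points, the function returns exactly the index of the corner, which is where the segment is split into two walls.<br />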
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
*mike<br />
<br />
'''To be extended with a description by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines through the points and then equating these lines to each other: each corner is the intersection of two adjacent lines. The final end points are determined by projecting the found vertices perpendicularly onto the fitted line.<br />
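The corner correction can be sketched as equating two fitted wall lines, each written as y = ax + c; the corner is their intersection, and near-parallel lines are rejected. Names and the parallelism threshold are illustrative assumptions.<br />

```cpp
#include <cmath>
#include <optional>

// A wall line in slope-intercept form y = a*x + c, as produced by the
// line fitting step.
struct Line { double a; double c; };
struct Corner { double x; double y; };

// Equates two fitted lines to find the corner where the walls meet;
// returns nothing when the lines are (near-)parallel.
std::optional<Corner> intersect(const Line& l1, const Line& l2) {
    double denom = l1.a - l2.a;
    if (std::fabs(denom) < 1e-9) return std::nullopt;  // parallel walls
    double x = (l2.c - l1.c) / denom;  // solve a1*x + c1 = a2*x + c2
    return Corner{x, l1.a * x + l1.c};
}
```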
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin/Mike<br />
<br />
Kevin's description of spatial recognition will go here.<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
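A minimal sketch of such an S-curve setpoint is given below, using a smoothstep polynomial so that the commanded acceleration starts and ends at zero. The actual 'Drive' implementation may use a different S-curve form (see the equations linked under Useful Information); all names here are illustrative.<br />

```cpp
#include <algorithm>
#include <cmath>

// Smooth velocity setpoint for a change from v0 to v1 over rampTime
// seconds: the smoothstep polynomial gives zero acceleration at both the
// start and the end of the ramp, which is the defining property of an
// S-curve profile.
double sCurveVelocity(double v0, double v1, double rampTime, double t) {
    double s = std::clamp(t / rampTime, 0.0, 1.0);  // normalised ramp time
    double blend = s * s * (3.0 - 2.0 * s);         // smoothstep: 0 -> 1
    return v0 + (v1 - v0) * blend;
}
```

For example, ramping from standstill to the 0.5 m/s maximum over one second passes through 0.25 m/s at the halfway point and holds 0.5 m/s after the ramp.<br />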
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far away enough from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
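The repulsive part of the potential field can be sketched as follows: every rangefinder hit inside the avoidance radius contributes a vector pointing away from the obstacle, and symmetric contributions, as in the corridor case above, cancel out. The inverse-distance weighting and all names are illustrative choices, not the project's actual implementation.<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// 2D vector used for the potential field sum.
struct Vec2 { double x = 0.0; double y = 0.0; };

// Sums the repulsive contributions of all rangefinder hits that fall
// inside the avoidance radius. Each hit pushes the robot away from the
// obstacle, opposite to the beam direction, with a weight that grows as
// the obstacle gets closer.
Vec2 repulsionVector(const std::vector<double>& ranges,
                     double angleMin, double angleIncrement,
                     double avoidRadius, double gain) {
    Vec2 sum;
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double r = ranges[i];
        if (r <= 0.0 || r >= avoidRadius) continue;  // outside the region
        double angle = angleMin + static_cast<double>(i) * angleIncrement;
        double weight = gain * (avoidRadius - r) / r;  // stronger when close
        sum.x -= weight * std::cos(angle);
        sum.y -= weight * std::sin(angle);
    }
    return sum;
}
```

Adding this sum to the attraction vector towards the current goal point gives the combined field shown in the rightmost image above.<br />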
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the field of view spans +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement angles, sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico Using Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved to a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the amount of time it takes to reach the maximum velocity in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves doet een gifje<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution to not bump into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did succeed the challenge, we were the slowest performing group as a result of the named modifications to the configuration. We felt however that these modifications were worth the slowdown and proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed on different locations on the map. These locations are: at cabinets, on junction, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from a different point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between two points, it can be specified whether the path runs through a doorway or hallway, or through a room. This can help determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not a neighbor, the software creates a route to that point. This route is a list of points the robot needs to visit to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
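As an illustration, the shortest-route computation could be sketched as follows. This is a minimal sketch, not the actual PICO code: the graph representation (an adjacency list of (neighbor, length) pairs indexed by point number) and all names are illustrative.<br />

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Minimal Dijkstra sketch: returns the route from `start` to `goal` as a
// list of point indices, given an undirected weighted graph stored as an
// adjacency list of (neighbor, edge length) pairs.
std::vector<int> shortestRoute(
    const std::vector<std::vector<std::pair<int, double>>>& graph,
    int start, int goal)
{
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(graph.size(), inf);
    std::vector<int> prev(graph.size(), -1);
    // Min-heap ordered on tentative distance to the point.
    std::priority_queue<std::pair<double, int>,
                        std::vector<std::pair<double, int>>,
                        std::greater<>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;   // stale queue entry, already improved
        if (u == goal) break;        // shortest distance to goal is final
        for (auto [v, w] : graph[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                prev[v] = u;
                pq.push({dist[v], v});
            }
        }
    }
    // Walk back from goal to start to recover the route.
    std::vector<int> route;
    for (int v = goal; v != -1; v = prev[v])
        route.insert(route.begin(), v);
    return route;
}
```

The returned list can then be handed to the driving logic, which moves the robot from point to point along the route.<br />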
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin: to add reflection and GIFs<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This situation can occur, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'', with a field of view of 229.2°. The field of view is divided into 1000 equally spaced measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
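The relation between beam index and measurement angle can be sketched as follows. The constants are the simulator values quoted above and, as noted, still have to be verified on the real robot; the function name is illustrative.<br />

```cpp
#include <cmath>

// Simulator values (assumptions until verified on the real robot):
// field of view 229.2 degrees, i.e. -114.6 to +114.6, split into 1000 beams.
constexpr int    kNumBeams   = 1000;
constexpr double kFovDegrees = 229.2;

// Convert a beam index (0..999) to its angle in radians, measured from the
// front of the robot (negative = right, positive = left).
double beamAngleRad(int index)
{
    const double pi   = std::acos(-1.0);
    const double step = kFovDegrees / (kNumBeams - 1);      // degrees per beam
    const double deg  = -0.5 * kFovDegrees + index * step;  // -114.6 .. +114.6
    return deg * pi / 180.0;
}
```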
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves afmaken<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
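A minimal sketch of this filtering step is given below, assuming the range limits from the Specifications section (0.1 m to 10.0 m). The struct and function names are illustrative, not the actual project code.<br />

```cpp
#include <vector>

struct LaserPoint { double range; double angle; };

// Drop LRF points whose range falls outside the sensor's valid window.
// This removes spurious points measured at (or very near) the sensor
// origin, e.g. caused by dust on the sensor, as well as out-of-range hits.
std::vector<LaserPoint> filterScan(const std::vector<LaserPoint>& scan)
{
    constexpr double kMinRange = 0.1;   // below this: dust / origin artefacts
    constexpr double kMaxRange = 10.0;  // beyond this: no valid return
    std::vector<LaserPoint> valid;
    valid.reserve(scan.size());
    for (const auto& p : scan)
        if (p.range >= kMinRange && p.range <= kMaxRange)
            valid.push_back(p);
    return valid;
}
```

Only the filtered points are passed on to the feature detection algorithm.<br />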
<br />
=== Detection ===<br />
<br />
*Collin: Dijkstra description to be added<br />
<br />
==== Path planning ====<br />
The path points are determined both automatically and by hand. The program loads the JSON map file on startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, after which PICO corrects itself if it is not aligned correctly.<br />
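The orientation correction at a cabinet point can be sketched as follows: the difference between the cabinet's direction and PICO's orientation is wrapped to [-π, π], so PICO always turns the short way. The function name is illustrative.<br />

```cpp
#include <cmath>

// Rotation PICO still has to make at a cabinet point, given the path
// point's `direction` and PICO's current orientation (both in radians,
// world frame). The result is wrapped to [-pi, pi].
double alignmentError(double cabinetDirection, double robotOrientation)
{
    const double pi = std::acos(-1.0);
    double err = cabinetDirection - robotOrientation;
    while (err >  pi) err -= 2.0 * pi;   // wrap down into [-pi, pi]
    while (err < -pi) err += 2.0 * pi;   // wrap up into [-pi, pi]
    return err;
}
```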
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is in the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to turn this data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (''ax + by + c = 0'')<br />
## For each data point between these end points, determine the perpendicular distance to the line (''d'' = abs(a*x + b*y + c) / sqrt(a^2 + b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If this value falls below the limit value, there are no further sub-segments in the global segment.<br />
##* If the value exceeds the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
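The split recursion (steps 3.1 to 3.4) can be sketched as follows; the distance test matches the formula above. The threshold value and all names are illustrative, not the actual project code.<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Recursive split step: fit the line a*x + b*y + c = 0 through the
// segment's end points, find the intermediate point with the largest
// perpendicular distance d = |a*x + b*y + c| / sqrt(a^2 + b^2), and
// split there if d exceeds the threshold. Split indices go into `breaks`.
void splitSegment(const std::vector<Pt>& pts, std::size_t first,
                  std::size_t last, double threshold,
                  std::vector<std::size_t>& breaks)
{
    if (last <= first + 1) return;       // nothing between the end points
    const Pt p = pts[first], q = pts[last];
    const double a = q.y - p.y;          // line through p and q
    const double b = p.x - q.x;
    const double c = q.x * p.y - p.x * q.y;
    const double norm = std::sqrt(a * a + b * b);
    double dMax = 0.0;
    std::size_t iMax = first;
    for (std::size_t i = first + 1; i < last; ++i) {
        const double d = std::fabs(a * pts[i].x + b * pts[i].y + c) / norm;
        if (d > dMax) { dMax = d; iMax = i; }
    }
    if (dMax > threshold) {              // corner found: recurse on both halves
        splitSegment(pts, first, iMax, threshold, breaks);
        breaks.push_back(iMax);
        splitSegment(pts, iMax, last, threshold, breaks);
    }
}
```

For an L-shaped scan the recursion terminates with a single break index at the corner point.<br />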
<br />
Below is a visual representation of the split principle. The original image is taken from the EMC course of 2017, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with Mike's description of his RANSAC function!'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted such that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating adjacent lines to each other; their intersections give the corrected corner points. The final end points are determined by projecting the outermost found vertices perpendicularly onto the fitted line.<br />
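The corner correction, equating two fitted wall lines to find their intersection, can be sketched as below. The `Line` representation matches the ''ax + by + c = 0'' form used in the split algorithm; names are illustrative.<br />

```cpp
#include <cmath>

struct Line { double a, b, c; };   // line in the form a*x + b*y + c = 0

// Intersect two fitted wall lines to obtain the corrected corner point.
// Returns false when the lines are (near-)parallel, i.e. no corner exists.
bool intersect(const Line& l1, const Line& l2, double& x, double& y)
{
    const double det = l1.a * l2.b - l2.a * l1.b;   // zero for parallel lines
    if (std::fabs(det) < 1e-9) return false;
    // Cramer's rule on: a1*x + b1*y = -c1 and a2*x + b2*y = -c2
    x = (-l1.c * l2.b + l2.c * l1.b) / det;
    y = (-l1.a * l2.c + l2.a * l1.c) / det;
    return true;
}
```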
<br />
=== Monitor block ===<br />
<br />
*Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit condition; if so, the state machine transitions to the next state.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*Kevin<br />
<br />
Kevin's content on spatial recognition is to be added here.<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector vanishes once the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around it as depicted by the dotted line.<br />
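The repulsive part of the sum vector can be sketched as follows: every obstacle point inside the avoidance radius pushes the robot away, with a weight that grows as the distance shrinks. The radius and gain are illustrative tuning values, not the ones used on PICO.<br />

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Sum the repulsive contributions of all obstacle points (robot frame).
// Points outside `avoidRadius` contribute nothing; closer points push
// harder, so symmetric corridors cancel out to a zero sum vector.
Vec2 repulsionVector(const std::vector<Vec2>& obstaclePoints,
                     double avoidRadius, double gain)
{
    Vec2 sum{0.0, 0.0};
    for (const auto& p : obstaclePoints) {
        const double d = std::hypot(p.x, p.y);
        if (d < 1e-6 || d >= avoidRadius) continue;   // out of range
        const double w = gain * (avoidRadius - d) / (avoidRadius * d);
        sum.x -= w * p.x;   // push away from the obstacle point
        sum.y -= w * p.y;
    }
    return sum;
}
```

Adding this vector to the attraction toward the current goal point yields the combined field shown in the rightmost image above.<br />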
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the field of view runs from -114.6 to +114.6 degrees as measured from the front of the robot. This field of view is sampled at 1000 points, at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved to a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached while keeping the movement smooth. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, with a source for each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 equally spaced measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
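With these specifications, each rangefinder beam index can be converted to an angle relative to the robot's front, and each (index, range) pair to a point in the robot frame. A minimal sketch, assuming the 1000 beams span the field of view end to end (whether the increment divides by 999 or 1000 must be checked against the real framework):<br />
<br />
```python
import math

# LRF parameters as reported by the PICO simulator (to be verified on hardware).
FOV_DEG = 229.2        # total field of view in degrees
NUM_BEAMS = 1000       # number of equally spaced beams
ANGLE_MIN = math.radians(-FOV_DEG / 2)            # rightmost beam, from robot front
ANGLE_INC = math.radians(FOV_DEG) / (NUM_BEAMS - 1)  # assumption: endpoints included

def beam_angle(i):
    """Return the angle (rad) of beam index i, measured from the robot's front."""
    return ANGLE_MIN + i * ANGLE_INC

def beam_to_xy(i, r):
    """Convert a beam index and measured range to robot-frame x (forward), y (left)."""
    a = beam_angle(i)
    return r * math.cos(a), r * math.sin(a)
```
<br />
Beam 0 then corresponds to -114.6° and beam 999 to +114.6°, matching the simulator values above.<br />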
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*TODO (Yves): finish this section.<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*TODO (Collin): describe the Dijkstra-based route planning.<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, after which the orientation is corrected if PICO is not aligned correctly.<br />
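The alignment step described above amounts to wrapping the difference between the stored direction and PICO's measured orientation into (-π, π], so the correction always rotates the short way round. A minimal sketch (the function name is our own, not part of the EMC framework):<br />
<br />
```python
import math

def angle_error(target_dir, current_heading):
    """Smallest signed rotation (rad) that turns current_heading onto target_dir."""
    e = target_dir - current_heading
    # atan2(sin, cos) wraps the raw difference into (-pi, pi]
    return math.atan2(math.sin(e), math.cos(e))
```
<br />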
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
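A route between non-neighbouring path points can be computed with Dijkstra's algorithm over the graph defined by the tables above. A minimal sketch; the edge list is transcribed from the two path-length tables, and the helper names are our own:<br />
<br />
```python
import heapq

# Undirected graph transcribed from the path-length tables above.
EDGES = [
    (4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06),
    (6, 7, 1.7), (7, 8, 2.0), (8, 9, 1.5), (9, 2, 1.6), (9, 10, 1.84),
    (9, 11, 1.17), (2, 10, 0.9), (10, 11, 0.85), (11, 12, 1.2),
    (12, 13, 1.17), (12, 14, 0.8), (13, 0, 0.5), (13, 14, 0.85),
    (14, 15, 1.2), (15, 1, 1.1), (15, 16, 0.7), (15, 17, 0.76),
    (1, 16, 0.85), (16, 17, 1.1), (17, 18, 1.5), (18, 19, 2.0), (19, 8, 2.0),
]

def dijkstra(edges, start):
    """Shortest path lengths and predecessors from `start` over an undirected graph."""
    adj = {}
    for a, b, w in edges:
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already settled with a shorter distance
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def route(prev, start, goal):
    """Reconstruct the point sequence start -> goal from the predecessor map."""
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```
<br />
For example, from the start point 4 the shortest route to cabinet 3 is 4 -> 5 -> 3 (0.86 + 0.8 = 1.66), which beats the direct alternative 4 -> 6 -> 3 (1.49 + 1.06 = 2.55).<br />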
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used approach is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (in the form ax + by + c = 0)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If the largest distance falls below the limit value, then there are no further segments (parts) within the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
<br />
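The split recursion in step 3 can be sketched as follows; the threshold value and data representation here are our own illustrative choices, not the values used on the robot:<br />
<br />
```python
import math

def split(points, threshold):
    """Recursively split an ordered list of (x, y) points into straight segments.

    Returns the indices of the segment break points (including both end points),
    following steps 3.1-3.4 above.
    """
    (x1, y1), (x2, y2) = points[0], points[-1]
    # Line through the two end points, written as a*x + b*y + c = 0.
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = math.hypot(a, b) or 1.0  # guard against coincident end points
    # Point with the largest perpendicular distance to the end-point line:
    # d = |a*x + b*y + c| / sqrt(a^2 + b^2), as in step 3.3.
    i_max, d_max = max(
        ((i, abs(a * x + b * y + c) / norm) for i, (x, y) in enumerate(points)),
        key=lambda t: t[1],
    )
    if d_max < threshold or len(points) < 3:
        return [0, len(points) - 1]          # one straight segment, no split
    left = split(points[: i_max + 1], threshold)
    right = split(points[i_max:], threshold)
    return left + [i + i_max for i in right[1:]]  # merge, offsetting right indices
```
<br />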
Below is a visual representation of the split principle. The original image is taken from the 2017 EMC course page of group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating the lines to each other: the intersection of two adjacent lines gives the corner point. The final end points are determined as the points on the line onto which the outermost found vertices project perpendicularly.<br />
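The two operations in that correction step, intersecting adjacent lines and projecting a vertex perpendicularly onto a line, can be sketched as follows; the line representation a·x + b·y = c is our own choice:<br />
<br />
```python
def intersect(l1, l2):
    """Intersection of two lines, each given as (a, b, c) with a*x + b*y = c."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # (nearly) parallel walls: no corner point
    # Cramer's rule for the 2x2 linear system
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def project(line, p):
    """Perpendicular projection of point p = (x, y) onto line a*x + b*y = c."""
    a, b, c = line
    x, y = p
    t = (a * x + b * y - c) / (a * a + b * b)  # signed distance times normal length
    return (x - a * t, y - b * t)
```
<br />
For example, the wall lines x = 1 and y = 2 intersect at the corner (1, 2), and a vertex at (3, 5) projects onto x = 1 at (1, 5).<br />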
<br />
=== Monitor block ===<br />
<br />
*TODO (Collin): finish this section.<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the state has fulfilled its exit conditions. '''TODO: extend this description.'''<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
*TODO (Kevin): describe the world model and spatial recognition here.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
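The exact profile used by 'Drive' is not reproduced here; as an illustration, an S-curve velocity ramp can be built from a smoothstep blend (our own choice of profile): the velocity changes slowly at the start and end of the ramp and fastest halfway, so acceleration is continuous and zero at both ends.<br />
<br />
```python
def s_curve_velocity(t, v0, v1, T):
    """Smooth velocity ramp from v0 to v1 over duration T (an S-curve:
    zero acceleration at both ends, peak acceleration halfway)."""
    if t <= 0.0:
        return v0
    if t >= T:
        return v1
    s = t / T
    blend = 3 * s * s - 2 * s * s * s  # smoothstep: C1-continuous S-shape
    return v0 + (v1 - v0) * blend
```
<br />
For example, ramping from standstill to the 0.5 m/s translation limit over T seconds passes through 0.25 m/s exactly at t = T/2.<br />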
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
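The repulsive part of these four situations can be sketched as a sum vector over the rangefinder points inside an avoidance radius; the radius, gain and magnitude profile below are illustrative assumptions, not the tuned values used on the robot:<br />
<br />
```python
import math

def repulsion_vector(scan_xy, r_avoid=0.6, gain=1.0):
    """Sum of repulsive component vectors for all scan points (robot frame,
    x forward / y left) that lie inside the avoidance radius.

    Each point inside r_avoid pushes the robot directly away from it, with a
    magnitude that grows as the point gets closer and is zero at the radius edge.
    """
    fx = fy = 0.0
    for x, y in scan_xy:
        d = math.hypot(x, y)
        if 0.0 < d < r_avoid:
            mag = gain * (1.0 / d - 1.0 / r_avoid)  # illustrative magnitude profile
            fx -= mag * x / d  # push away from the obstacle point
            fy -= mag * y / d
    return fx, fy
```
<br />
In a symmetric corridor (points equally far to the left and right) the component vectors cancel, while a closer left wall yields a net vector pointing right, exactly as in the second and third images above.<br />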
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is measured in 1000 parts, at a time interval that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the time over which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77057Embedded Motion Control 2019 Group 32019-06-12T15:20:59Z<p>S136625: /* Monitor block */</p>
<hr />
<div>
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*TODO (Job): write the introduction.<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*TODO (Yves): add a GIF of the challenge.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program completed the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that the modifications were worth the slowdown and that they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at various locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be approached from a neighbouring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This can help determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive in order to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
TBD<br />
<br />
*TODO (Kevin): add GIFs and further details.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*To be finished by Yves<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
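As a minimal sketch of this conditioning step (the range limits and the list-based scan representation are assumptions for illustration, not the actual framework API), the filtering could look like:<br />

```python
def condition_scan(ranges, range_min=0.1, range_max=10.0):
    """Replace invalid LRF readings with None so later stages can skip them.

    Points measured at (or very near) the sensor origin -- for instance
    caused by dust on the sensor -- fall below the minimum range and are
    discarded, as are readings beyond the maximum range.
    """
    return [r if range_min <= r <= range_max else None for r in ranges]
```

The feature detection stage can then simply skip `None` entries instead of fitting lines through spurious points at the origin.<br />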
<br />
=== Detection ===<br />
<br />
*To be written by Collin: route finding with Dijkstra's algorithm<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, after which PICO corrects itself if it is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
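The automatic part of this step could be sketched as follows. The JSON field names (''cabinets'', ''front_area'', ''centre'') are illustrative assumptions; the actual hospital map format may differ.<br />

```python
import json
import math

def cabinet_path_points(map_json):
    """Compute one path point per cabinet: the midpoint of the virtual
    area in front of the cabinet, plus the heading PICO should take there.

    The JSON field names used here ('cabinets', 'front_area', 'centre')
    are assumptions; the real hospital map format may differ.
    """
    points = []
    for cab in json.loads(map_json)["cabinets"]:
        # Front area given as two opposite corners [[x1, y1], [x2, y2]].
        (x1, y1), (x2, y2) = cab["front_area"]
        mid = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        # Heading from the midpoint towards the cabinet centre.
        cx, cy = cab["centre"]
        direction = math.atan2(cy - mid[1], cx - mid[0])
        points.append({"x": mid[0], "y": mid[1], "direction": direction})
    return points
```

The hand-placed path points from the tables below would then be appended to this list.<br />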
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
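The point and path-length tables above define a weighted graph, over which the shortest route between two path points can be computed. A sketch with Dijkstra's algorithm, using the first path-length table as the edge list (the second table is omitted here for brevity):<br />

```python
import heapq

# Undirected edges between path points, taken from the tables above
# (point indices and lengths in metres); only the first table is listed.
EDGES = [
    (4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06),
    (6, 7, 1.7), (7, 8, 2.0), (8, 9, 1.5), (9, 2, 1.6), (9, 10, 1.84),
    (9, 11, 1.17), (2, 10, 0.9), (10, 11, 0.85), (11, 12, 1.2),
]

def shortest_route(edges, start, goal):
    """Dijkstra's algorithm: return (total length, list of point indices)."""
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (dist + w, nxt, path + [nxt]))
    return float("inf"), []
```

For example, from the start point 4 to cabinet point 3, the route via point 5 (0.86 + 0.8 m) beats the direct detour via point 6.<br />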
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the straight line through these end points (''a·x + b·y + c = 0'')<br />
## For each data point between these end points, determine the perpendicular distance to the line (''d = |a·x + b·y + c| / sqrt(a² + b²)'')<br />
## Compare the largest of these distances with the distance threshold<br />
##* If it falls below the threshold, the global segment contains no further sub-segments.<br />
##* If it exceeds the threshold, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
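The recursive split step (3) above can be sketched as follows, returning the indices at which a segment breaks into straight parts; the threshold value is an illustrative assumption:<br />

```python
import math

def split(points, threshold=0.05):
    """Recursively split a segment of (x, y) points at the point furthest
    from the line through the segment's end points, as long as that
    distance exceeds the threshold. Returns the break-point indices,
    including both end points."""
    def line_coeffs(p, q):
        # Line a*x + b*y + c = 0 through p and q.
        (x1, y1), (x2, y2) = p, q
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        return a, b, c

    def rec(lo, hi):
        a, b, c = line_coeffs(points[lo], points[hi])
        norm = math.hypot(a, b)
        best_i, best_d = None, 0.0
        for i in range(lo + 1, hi):
            d = abs(a * points[i][0] + b * points[i][1] + c) / norm
            if d > best_d:
                best_i, best_d = i, d
        if best_i is None or best_d <= threshold:
            return [lo, hi]           # no further split needed
        left, right = rec(lo, best_i), rec(best_i, hi)
        return left[:-1] + right      # merge, dropping the duplicated index

    return rec(0, len(points) - 1)
```

On an L-shaped set of scan points, this returns the two end points plus the corner between the two walls.<br />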
<br />
Below is a visual representation of the split principle. The original image is taken from the wiki of EMC 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating these lines to each other: intersecting neighbouring lines yields the corner points, and the final end points are found by projecting the found vertices perpendicularly onto their line.<br />
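Using the ''a·x + b·y + c = 0'' line representation from the split step, equating two lines and projecting a vertex onto a line reduce to the following small helpers (a sketch, not the exact implementation):<br />

```python
def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y + c = 0.
    Returns None for (near-)parallel lines."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return x, y

def project(l, p):
    """Foot of the perpendicular from point p onto line (a, b, c):
    used to snap a found vertex onto its fitted wall line."""
    a, b, c = l
    x, y = p
    t = (a * x + b * y + c) / (a * a + b * b)
    return x - a * t, y - b * t
```

For two perpendicular walls, `intersect` gives the shared corner; for a free wall end, `project` snaps the outermost scan point onto the fitted line.<br />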
<br />
=== Monitor block ===<br />
<br />
*To be finished by Collin<br />
<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
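A minimal sketch of such a tick-based state machine (the state names and the world-model dictionary are illustrative, not the actual states from the chart):<br />

```python
class StateMachine:
    """Minimal monitor loop: each tick, run the current state's action and
    check its exit condition; on fulfilment, transition and announce it."""

    def __init__(self, states, initial):
        # states: name -> (action, exit_test, next_state)
        self.states = states
        self.current = initial

    def tick(self, world):
        action, exit_test, next_state = self.states[self.current]
        action(world)
        if exit_test(world):
            # Communicate the state change, as required by the challenge.
            print(f"Leaving '{self.current}' -> '{next_state}'")
            self.current = next_state
```

Functions that always run, such as position tracking, would live outside these per-state actions, as described above.<br />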
<br />
=== World model block ===<br />
*To be written by Kevin: spatial recognition and related topics.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
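One common S-curve form for such a velocity transition is the cubic smoothstep; the sketch below shows how 'Drive' could ramp its velocity setpoint (this particular polynomial is an assumption, not necessarily the exact profile used):<br />

```python
def s_curve_velocity(v_start, v_end, t, t_total):
    """Velocity setpoint at time t for a smooth transition from v_start
    to v_end over t_total seconds, using the cubic smoothstep
    3*s^2 - 2*s^3. One of several possible S-curve forms."""
    s = min(max(t / t_total, 0.0), 1.0)   # normalised, clamped time
    return v_start + (v_end - v_start) * (3 * s * s - 2 * s ** 3)
```

The acceleration is zero at both ends of the transition, which is what makes the movement fluent; 'Drive distance' could integrate this profile to cover a requested distance.<br />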
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far away enough from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
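The behaviour described above can be sketched as a sum of one attractive and several repulsive vectors; the gains and avoidance radius below are illustrative values, not the tuned parameters:<br />

```python
import math

def potential_field(goal, obstacles, avoid_radius=0.5, k_att=1.0, k_rep=0.5):
    """Sum an attractive unit vector towards the goal and repulsive
    vectors away from every obstacle point inside the avoidance radius.
    Coordinates are in the robot frame; gains are illustrative."""
    gx, gy = goal
    norm = math.hypot(gx, gy) or 1.0
    fx, fy = k_att * gx / norm, k_att * gy / norm
    for ox, oy in obstacles:
        d = math.hypot(ox, oy)
        if 0.0 < d < avoid_radius:
            # Repulsion grows as the obstacle gets closer.
            scale = k_rep * (avoid_radius - d) / (avoid_radius * d)
            fx -= scale * ox
            fy -= scale * oy
    return fx, fy
```

In a symmetric corridor the sideways repulsions cancel (second image), while a single nearby obstacle on the left pushes the sum vector to the right (third image).<br />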
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This angular range is divided into 1000 measurement points, which are sampled at a rate that can be set by the user.<br />
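With these simulated specifications, a beam index and measured range can be converted to a Cartesian point in the robot frame as follows (the exact index-to-angle convention of the framework is an assumption):<br />

```python
import math

FOV_DEG = 229.2   # field of view from the simulator
N_BEAMS = 1000    # number of range measurements per scan

def beam_to_point(index, distance):
    """Convert a beam index and measured range to an (x, y) point in the
    robot frame, with x pointing forward. Beam 0 is assumed to lie at
    -114.6 degrees and the last beam at +114.6 degrees."""
    angle = math.radians(-FOV_DEG / 2 + index * FOV_DEG / (N_BEAMS - 1))
    return distance * math.cos(angle), distance * math.sin(angle)
```

The middle beam then points straight ahead, which is a quick sanity check when verifying these values on the real robot.<br />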
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the field of view, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77056Embedded Motion Control 2019 Group 32019-06-12T15:20:44Z<p>S136625: /* Wall finding algorithm */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves will add a GIF here<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
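The wall-following behaviour in the state chart can be condensed into a simple per-tick command function; the distance band, stop distance and velocities below are illustrative, not the values used during the challenge:<br />

```python
def wall_follow_command(right_dist, front_dist,
                        d_min=0.3, d_max=0.5, front_stop=0.4, v=0.2):
    """Return (forward velocity, sideways velocity, rotation) for a simple
    right-wall follower: stay inside the [d_min, d_max] band, and rotate
    90 degrees counter-clockwise when a wall appears in front.
    Distances in metres; thresholds are illustrative."""
    if front_dist < front_stop:
        return 0.0, 0.0, 1.0      # rotate CCW towards the next wall
    if right_dist < d_min:
        return v, 0.1, 0.0        # drift left, away from the wall
    if right_dist > d_max:
        return v, -0.1, 0.0       # drift right, back towards the wall
    return v, 0.0, 0.0            # inside the band: drive straight
```

The gap detection that triggers turning into the corridor sits on top of this, in the states of the chart above.<br />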
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did succeed in the challenge, we were the slowest performing group as a result of the aforementioned modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map from the provided map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from a different point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This can help in how the robot trajectory should be controlled while driving from point to point.<br />
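The first part of this step, driving to where B approximately is, can be sketched as a frame transformation (the frame conventions and heading estimate are assumptions):<br />

```python
import math

def target_in_robot_frame(direction, distance, robot_heading):
    """Where point B lies relative to the robot standing on point A,
    given the stored direction and distance from A to B (map frame) and
    the robot's current heading estimate. A localisation step near B
    would then refine this using the surrounding spatial features."""
    rel = direction - robot_heading   # bearing to B in the robot frame
    return distance * math.cos(rel), distance * math.sin(rel)
```

The refinement near B would replace this dead-reckoned estimate with one based on the detected walls and corners around B.<br />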
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive to get to the end point. To make the route as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
TBD<br />
<br />
*To be extended by Kevin, with GIFs and so on<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*To be finished by Yves<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*To be written by Collin: route finding with Dijkstra's algorithm<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, after which PICO corrects itself if it is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To navigate safely, PICO must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, extended with the RANSAC algorithm. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (a*x + b*y + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = |a*x + b*y + c| / sqrt(a^2 + b^2))<br />
## Compare the largest of these distances against the distance threshold<br />
##* If this distance falls below the threshold, the global segment contains no further sub-segments.<br />
##* If it exceeds the threshold, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
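The recursive split step (3.1 to 3.4) can be sketched as follows. This is a hedged illustration with our own names and threshold, not the project's implementation:<br />
<br />

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b,
// using the line form A*x + B*y + C = 0 as in the steps above.
double distToLine(const Pt& p, const Pt& a, const Pt& b)
{
    double A = b.y - a.y;
    double B = a.x - b.x;
    double C = -(A * a.x + B * a.y);
    return std::fabs(A * p.x + B * p.y + C) / std::sqrt(A * A + B * B);
}

// Recursively collect the indices at which the segment [first, last] splits.
void split(const std::vector<Pt>& pts, std::size_t first, std::size_t last,
           double threshold, std::vector<std::size_t>& breakpoints)
{
    if (last <= first + 1) return;
    std::size_t worst = first;
    double worstDist = 0.0;
    for (std::size_t i = first + 1; i < last; ++i) {
        double d = distToLine(pts[i], pts[first], pts[last]);
        if (d > worstDist) { worstDist = d; worst = i; }
    }
    if (worstDist > threshold) {           // corner found: split here
        split(pts, first, worst, threshold, breakpoints);
        breakpoints.push_back(worst);
        split(pts, worst, last, threshold, breakpoints);
    }
}
```

For an L-shaped run of points, this yields exactly one breakpoint at the corner.<br />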
<br />
Below is a visual representation of the split principle. The original image is taken from the wiki of EMC 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description by Mike of his RANSAC function.'''<br />
<br />
A final line correction is needed because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted such that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then intersecting those lines with each other. The final end points are found by projecting the found vertices perpendicularly onto the fitted line.<br />
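Intersecting two fitted wall lines to recover a corner can be done as below. This is a small illustrative helper (naming is ours), using the same a*x + b*y + c = 0 line form as the split step:<br />
<br />

```cpp
#include <cassert>
#include <cmath>
#include <optional>

struct Line { double a, b, c; };   // a*x + b*y + c = 0
struct Point { double x, y; };

// Solve the 2x2 system of the two line equations with Cramer's rule;
// returns nothing if the walls are (nearly) parallel.
std::optional<Point> intersect(const Line& l1, const Line& l2)
{
    double det = l1.a * l2.b - l2.a * l1.b;
    if (std::fabs(det) < 1e-9) return std::nullopt;
    return Point{ (l1.b * l2.c - l2.b * l1.c) / det,
                  (l2.a * l1.c - l1.a * l2.c) / det };
}
```

For example, the wall x = 1 and the wall y = 2 intersect at the corner (1, 2).<br />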
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit condition; if so, the transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, that always run in a separate thread. The state chart is designed such that all the requirements of the final challenge are fulfilled.<br />
<br />
=== World model block ===<br />
To be added by Kevin: description of spatial recognition in the world model.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
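As an illustration of the smooth velocity ramp behind 'Drive', the following sketch blends from one velocity to another with zero acceleration at both ends. The sinusoidal blend and the names are our own simplification, not the project's S-curve implementation:<br />
<br />

```cpp
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

// Smoothly ramp the commanded velocity from v0 to v1 over rampTime
// seconds; acceleration starts and ends at zero, so the motion is fluent.
double sCurveVelocity(double v0, double v1, double t, double rampTime)
{
    if (t <= 0.0)      return v0;
    if (t >= rampTime) return v1;
    double s = 0.5 * (1.0 - std::cos(kPi * t / rampTime));  // 0 -> 1
    return v0 + (v1 - v0) * s;
}
```

Halfway through the ramp the commanded velocity is exactly the average of v0 and v1, and the end velocity is reached without a jump in acceleration.<br />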
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
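A minimal sketch of the repulsive component of such a potential field is given below. The function name, the avoidance radius and the weighting are our own assumptions for illustration, not the project's implementation:<br />
<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Sum a repulsive vector over all obstacle points (in the robot frame):
// points inside the avoidance radius push the robot away, with a weight
// that grows as the obstacle gets closer. Symmetric obstacles cancel out.
Vec2 repulsion(const std::vector<Vec2>& obstacles, double avoidRadius = 0.5)
{
    Vec2 sum{0.0, 0.0};
    for (const Vec2& o : obstacles) {
        double d = std::hypot(o.x, o.y);
        if (d < 1e-6 || d >= avoidRadius) continue;  // out of range: no push
        double w = (avoidRadius - d) / d;            // stronger when closer
        sum.x -= w * o.x / d;                        // unit vector away from o
        sum.y -= w * o.y / d;
    }
    return sum;
}
```

This reproduces the behaviour in the figure: a symmetric corridor yields a zero sum vector, while a single nearby obstacle yields a vector pointing away from it.<br />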
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement angles, sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actually measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<div>
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves: a clip of the challenge will be added here.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
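The wall-following band described above, forcing the robot to stay between a minimum and a maximum distance to the right-hand wall, can be sketched as follows (the thresholds and names are illustrative, not the values used during the challenge):<br />
<br />

```cpp
#include <cassert>

enum class Steer { Left, Right, Straight };

// Keep the measured right-hand wall distance inside a band: steer away
// when too close, steer towards the wall when too far, otherwise go straight.
Steer followRightWall(double rightDist, double minDist = 0.4, double maxDist = 0.6)
{
    if (rightDist < minDist) return Steer::Left;   // too close: move away
    if (rightDist > maxDist) return Steer::Right;  // too far: move closer
    return Steer::Straight;                        // inside the band
}
```

Evaluating this every tick against the latest rangefinder reading yields the band-keeping behaviour of the wall follower.<br />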
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program completed the challenge, we were the slowest performing group as a result of these modifications. We felt, however, that the modifications were worth the slowdown, as they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be reached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to approximately where B is. Then, using the spatial features surrounding point B, the robot can more accurately determine its location relative to B and drive to B. For the path between points, it can be defined whether the path runs through a doorway or hallway, or through a room. This can help determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighbouring, the software creates a route to that point. This route is a list of points the robot needs to visit to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
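Dijkstra's algorithm on such a point map can be sketched as follows. This is a hedged illustration (graph representation and names are ours); the example edges below are a small excerpt of the path-length tables:<br />
<br />

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

using Edge = std::pair<int, double>;  // (neighbouring point, path length)

// Classic Dijkstra with a min-heap: returns the shortest distance from
// the start point to every other point in the graph.
std::vector<double> dijkstra(const std::vector<std::vector<Edge>>& graph, int start)
{
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(graph.size(), inf);
    std::priority_queue<std::pair<double, int>,
                        std::vector<std::pair<double, int>>,
                        std::greater<>> pq;  // (distance so far, point)
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;  // stale queue entry, already improved
        for (const auto& [v, w] : graph[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}
```

With the excerpt edges 4-5 (0.86), 4-6 (1.49), 5-3 (0.8), 5-6 (0.7) and 3-6 (1.06), the shortest route from the start point 4 to cabinet point 3 goes via point 5 with total length 1.66.<br />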
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin: clips and further details to be added.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, together with the source of the specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement angles. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves afmaken<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
*Collin dijkstra shit<br />
<br />
==== Path planning ====<br />
The method of determine the path points is done automatic and by hand. The program will load the Json map file when the program starts. The code will detect where all the cabinets are and what the front is of a cabinet. Each cabinet path point will exactly be placed in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified within the direction variable. The direction will be subtracted to the real orientation of PICO and afterward be corrected if PICO is not aligned right.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, he must know where he is in the world map and what is around him. PICO is equipped with a LIDAR scanner that scans the environment with the help of laser beams. This data is then processed to be able to determine where all walls and objects are. There are many ways in which you can process the data into useful information. A commonly used algorithm is the split and merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we do the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the largest of these distances with the distance threshold<br />
##* If this value is below the threshold, the global segment contains no further sub-segments.<br />
##* If it exceeds the threshold, the segment is split at that point, and steps 3.1 to 3.4 are repeated for the parts to the left and right of the split point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
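The split step above can be sketched as follows. This is a minimal Python illustration, not the actual project code; the helper names are our own:<br />

```python
import math

def line_through(p1, p2):
    # Line a*x + b*y + c = 0 through two points
    a = p2[1] - p1[1]
    b = p1[0] - p2[0]
    c = -(a * p1[0] + b * p1[1])
    return a, b, c

def point_line_distance(p, a, b, c):
    # Perpendicular distance from point p to the line a*x + b*y + c = 0
    return abs(a * p[0] + b * p[1] + c) / math.sqrt(a * a + b * b)

def split(points, threshold):
    # Recursively split a segment at the point farthest from the line
    # between its end points, as in steps 3.1 to 3.4 above.
    if len(points) <= 2:
        return [points[0], points[-1]]
    a, b, c = line_through(points[0], points[-1])
    dists = [point_line_distance(p, a, b, c) for p in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i_max - 1] < threshold:
        return [points[0], points[-1]]  # no further sub-segments
    left = split(points[:i_max + 1], threshold)
    right = split(points[i_max:], threshold)
    return left[:-1] + right  # merge, dropping the duplicated split point
```

For scan points along two walls meeting in a corner, this returns the two end points plus the corner point.<br />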
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction is needed because the RANSAC function only returns start and end points that lie somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and computing where these lines intersect. The final end points are found by projecting the found vertices perpendicularly onto these lines.<br />
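The corner correction described above amounts to intersecting fitted wall lines and projecting vertices onto them. A minimal sketch (hypothetical helpers, not the project code), with lines in the same ax + by + c = 0 form as used in the split step:<br />

```python
def intersect(l1, l2):
    # Intersection of two lines given as (a, b, c) with a*x + b*y + c = 0;
    # returns None for (near-)parallel lines.
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return x, y

def project(p, line):
    # Perpendicular projection of point p onto the line a*x + b*y + c = 0,
    # used to place the final end points on the fitted wall line.
    a, b, c = line
    t = (a * p[0] + b * p[1] + c) / (a * a + b * b)
    return (p[0] - a * t, p[1] - b * t)
```

For example, the corner of a vertical wall x = 1 and a horizontal wall y = 2 is found as their intersection point.<br />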
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, that always run in a separate thread. The state chart is designed such that all requirements of the final challenge are fulfilled.<br />
<br />
=== World model block ===<br />
Kevin's description of spatial recognition and the world model will be added here.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
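As a sketch of such a velocity ramp, the following uses a smoothstep-shaped S-curve. The specific polynomial is an assumption for illustration; the referenced S-curve equations may use a different profile:<br />

```python
def s_curve_velocity(v0, v1, t, t_ramp):
    # Smooth (S-shaped) velocity ramp from v0 to v1 over t_ramp seconds,
    # using the smoothstep polynomial 3s^2 - 2s^3, so that the
    # acceleration starts and ends at zero.
    if t <= 0:
        return v0
    if t >= t_ramp:
        return v1
    s = t / t_ramp
    blend = 3 * s * s - 2 * s ** 3
    return v0 + (v1 - v0) * blend
```

Halfway through the ramp, the velocity is exactly the average of the start and end velocities, while the acceleration is at its maximum.<br />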
<br />
Drive has been further incorporated into a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles, and the rightmost image shows the combination of the two. Every wall or object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
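The summation of repulsive component vectors can be sketched as follows. This is a minimal Python illustration; the linear distance weighting and parameter names are assumptions, not the project implementation:<br />

```python
import math

def potential_field_vector(obstacles, avoid_radius):
    # Sum repulsive vectors for every obstacle point (x, y), given in the
    # robot frame, that lies within the avoidance radius. Each component
    # points away from the obstacle and grows linearly as the obstacle
    # gets closer.
    fx, fy = 0.0, 0.0
    for ox, oy in obstacles:
        d = math.hypot(ox, oy)
        if 0.0 < d < avoid_radius:
            weight = (avoid_radius - d) / avoid_radius
            fx -= weight * ox / d  # unit vector away from the obstacle
            fy -= weight * oy / d
    return fx, fy
```

With obstacles placed symmetrically on both sides, the components cancel and the sum vector is zero, matching the corridor situation in the second image; a single nearby obstacle yields a vector pointing away from it.<br />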
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 0.1 m to 10 m; the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, which are sampled at a rate that can be configured by the user.<br />
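Given these simulated values, the angle of an individual measurement point follows from its index; a small sketch, assuming the 1000 points are spread uniformly over the field of view:<br />

```python
def beam_angle_deg(index, n_beams=1000, fov_deg=229.2):
    # Angle of beam `index` (0-based) relative to the front of the robot,
    # with index 0 at -114.6 degrees and the last beam at +114.6 degrees.
    if not 0 <= index < n_beams:
        raise IndexError("beam index out of range")
    increment = fov_deg / (n_beams - 1)
    return -fov_deg / 2 + index * increment
```

Beams with an index above roughly 500 therefore point to one side of the robot's forward direction, and lower indices to the other.<br />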
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actually measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest amount of time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77054Embedded Motion Control 2019 Group 32019-06-12T15:20:00Z<p>S136625: /* System architecture */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves will add a GIF clip of the challenge.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that the modifications were worth the slowdown, as they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at several kinds of locations on the map: at cabinets, at junctions, in front of doorways, and in rooms. When placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be specified whether this path runs through a doorway or hallway, or through a room. This helps determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not a neighboring point, the software creates a route to that point. This route is a list of points the robot needs to visit to reach the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
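Dijkstra's algorithm over such a point map can be sketched as follows; a minimal Python illustration over a graph of path points and edge lengths (the actual implementation is part of the robot software):<br />

```python
import heapq

def dijkstra(graph, start, goal):
    # graph: dict mapping node -> list of (neighbor, distance).
    # Returns the shortest route from start to goal as a list of nodes.
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return route
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph[node]:
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, route + [neighbor]))
    return None  # goal unreachable
```

Using a few of the path lengths listed on this page (4-5: 0.86, 5-6: 0.7, 4-6: 1.49, 6-7: 1.7), the route from point 4 to point 7 goes via point 6, since 1.49 + 1.7 is shorter than 0.86 + 0.7 + 1.7.<br />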
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin will add this section, including GIFs and so forth.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. This field of view is divided into 1000 measurement points. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
*Yves to finish this section.<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation PICO needs to have to stand in front of the cabinet. The direction is subtracted from the real orientation of PICO, and a correction is applied afterwards if PICO is not aligned correctly.<br />
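The automatic placement of a cabinet path point can be sketched as follows; a minimal illustration in which the representation of the area in front of a cabinet (a list of corner coordinates) is an assumption, not the real map format:<br />

```python
def cabinet_path_point(front_area):
    # front_area: list of (x, y) corners of the virtual area specified in
    # front of a cabinet; the path point is placed exactly in its middle.
    xs = [p[0] for p in front_area]
    ys = [p[1] for p in front_area]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
```

For a rectangular area spanning x = 0 to 0.8 and y = 3.0 to 3.4, this yields (0.4, 3.2), the position listed for cabinet 0 in the table below.<br />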
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process this data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the largest of these distances with the distance threshold<br />
##* If this value is below the threshold, the global segment contains no further sub-segments.<br />
##* If it exceeds the threshold, the segment is split at that point, and steps 3.1 to 3.4 are repeated for the parts to the left and right of the split point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction is needed because the RANSAC function only returns start and end points that lie somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and computing where these lines intersect. The final end points are found by projecting the found vertices perpendicularly onto these lines.<br />
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, that always run in a separate thread. The state chart is designed such that all requirements of the final challenge are fulfilled.<br />
<br />
=== World model block ===<br />
Kevin's description of spatial recognition and the world model will be added here.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated into a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles, and the rightmost image shows the combination of the two. Every wall or object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
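The behaviour described above can be sketched as a vector sum of one attractive component and one repulsive component per nearby obstacle. This is a simplified illustration: the gains and the avoidance radius below are made-up values, not those used on the robot.<br />

```python
import math

def potential_field_vector(goal, obstacles, r_avoid=0.6, k_att=1.0, k_rep=0.5):
    """Sum one attractive vector (towards the goal) with a repulsive
    vector per obstacle inside the avoidance radius r_avoid.
    All coordinates are (x, y) in metres, in the robot frame."""
    gx, gy = goal
    g_norm = math.hypot(gx, gy)
    # Unit-length attraction towards the goal (zero when already there).
    fx = k_att * gx / g_norm if g_norm > 0.0 else 0.0
    fy = k_att * gy / g_norm if g_norm > 0.0 else 0.0
    for ox, oy in obstacles:
        d = math.hypot(ox, oy)
        if 0.0 < d < r_avoid:
            # Repulsion grows as the obstacle gets closer, and vanishes
            # exactly at the edge of the avoidance region.
            scale = k_rep * (1.0 / d - 1.0 / r_avoid) / d
            fx -= scale * ox
            fy -= scale * oy
    return fx, fy
```

In a symmetric corridor the sideways repulsion components cancel, so the sum vector is purely the attraction vector and the robot keeps its straight trajectory, as in the second figure.<br />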
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, sampled at a rate that can be configured by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time needed to reach the maximum velocity in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77053Embedded Motion Control 2019 Group 32019-06-12T15:19:25Z<p>S136625: /* Reflection */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves is to add a GIF clip of the challenge here.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
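The state chart can be sketched as a simple transition table; the state and event names below are hypothetical and do not necessarily match the identifiers used in the actual code.<br />

```python
# Hypothetical state and event names for the wall-follower state chart.
TRANSITIONS = {
    ("find_wall",   "wall_ahead"): "align_wall",
    ("align_wall",  "aligned"):    "follow_wall",
    ("follow_wall", "wall_ahead"): "turn_ccw",    # next wall found: rotate 90 deg CCW
    ("turn_ccw",    "aligned"):    "follow_wall",
    ("follow_wall", "gap_right"):  "enter_exit",  # gap to the right: exit corridor found
    ("enter_exit",  "aligned"):    "follow_exit",
    ("follow_exit", "gap_right"):  "finished",    # second gap: finish line crossed
}

def step(state, event):
    """Return the next state; stay in the current state for events
    that are irrelevant to it."""
    return TRANSITIONS.get((state, event), state)
```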
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For each path between points, it can be defined whether the path runs through a doorway or hallway, or through a room. This helps determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighbouring, the software will create a route to that point. This route is a list of points the robot needs to visit to get to the end point. To make sure the route is as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
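Dijkstra's algorithm can be sketched with a priority queue as follows. This is a generic illustration rather than the project code; the example graph at the bottom uses a few of the path lengths listed in the tables further down this page.<br />

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: {neighbour: edge_length}}. Returns the shortest
    route from start to goal as a list of nodes, or None if the goal
    is unreachable (e.g. all routes blocked)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return route
        if node in visited:
            continue
        visited.add(node)
        for nb, length in graph[node].items():
            if nb not in visited:
                heapq.heappush(queue, (cost + length, nb, route + [nb]))
    return None

# Example with a few of the (undirected) path lengths from this page:
graph = {
    4: {5: 0.86, 6: 1.49},
    5: {4: 0.86, 6: 0.7, 3: 0.8},
    6: {4: 1.49, 5: 0.7, 3: 1.06},
    3: {5: 0.8, 6: 1.06},
}
# dijkstra(graph, 4, 3) -> [4, 5, 3] (via point 5, total length 1.66)
```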
<br />
== Reflection ==<br />
TBD<br />
<br />
*Kevin is to write this section and add GIFs.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications will be given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 equally spaced measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
==== Path planning ====<br />
The path points are determined both automatically and by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side of each cabinet is its front. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are placed by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet: it specifies the orientation PICO needs to have to stand in front of the cabinet. The direction is subtracted from the actual orientation of PICO, after which PICO corrects itself if it is not aligned properly.<br />
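The automatic placement of a cabinet path point (the middle of the area in front of the cabinet) can be sketched as averaging the corners of that area. The corner coordinates and the assumed map layout in the example are for illustration only, although the result happens to match cabinet 0 in the table below.<br />

```python
def cabinet_path_point(area_corners):
    """Midpoint of the area specified in front of a cabinet.
    area_corners: list of (x, y) tuples, e.g. the four rectangle
    corners as read from the JSON map (format assumed here)."""
    xs = [p[0] for p in area_corners]
    ys = [p[1] for p in area_corners]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Illustrative corner values chosen so the midpoint lands on (0.4, 3.2):
corners = [(0.0, 3.0), (0.8, 3.0), (0.8, 3.4), (0.0, 3.4)]
```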
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process this data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (written as a*x + b*y + c = 0)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the largest of these distances with the distance limit value<br />
##* If this largest distance falls below the limit value, the global segment contains no further sub-segments.<br />
##* If it exceeds the limit value, the segment is split at this point, and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
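Steps 3.1 to 3.4 can be sketched as a recursive split over the measurement points. The distance limit value below is an assumed placeholder, not the tuned value from the project.<br />

```python
import math

def split(points, d_max=0.05):
    """Recursively split a segment of (x, y) points wherever the
    perpendicular distance to the line between the segment end points
    exceeds d_max. Returns the indices of the break points, with the
    two end points included."""
    def perp_dist(p, p1, p2):
        # Line through p1 and p2 written as a*x + b*y + c = 0.
        (x1, y1), (x2, y2) = p1, p2
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        return abs(a * p[0] + b * p[1] + c) / math.hypot(a, b)

    def rec(lo, hi):
        if hi - lo < 2:          # no interior points left to test
            return []
        i_max = max(range(lo + 1, hi),
                    key=lambda i: perp_dist(points[i], points[lo], points[hi]))
        if perp_dist(points[i_max], points[lo], points[hi]) < d_max:
            return []            # segment is straight enough: no split
        return rec(lo, i_max) + [i_max] + rec(i_max, hi)

    n = len(points) - 1
    return [0] + rec(0, n) + [n]
```

For an L-shaped wall the point farthest from the end-point line is the corner, so the segment is split exactly there.<br />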
<br />
Below is a visual representation of the split principle. The original image comes from the 2017 EMC course, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with Mike's description of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines through the points and intersecting them with each other. The final end points are found by projecting the found vertices perpendicularly onto these lines.<br />
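Intersecting two fitted lines to obtain a corrected corner amounts to solving a 2x2 linear system; a minimal sketch, with each line written in the a*x + b*y + c = 0 form used earlier:<br />

```python
def line_intersection(l1, l2):
    """Intersection of two lines, each given as (a, b, c) with
    a*x + b*y + c = 0. Returns None for (near-)parallel lines."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None          # parallel walls never meet in a corner
    # Cramer's rule on a1*x + b1*y = -c1 and a2*x + b2*y = -c2.
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)
```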
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the current state has fulfilled its exit conditions.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
Kevin's description of the spatial recognition is still to be added here.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, sampled at a rate that can be configured by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time needed to reach the maximum velocity in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77052Embedded Motion Control 2019 Group 32019-06-12T15:19:02Z<p>S136625: /* Escape room challenge */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
*Yves will add a GIF here.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep from the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these configuration changes. We felt, however, that the modifications were worth the slowdown and demonstrated the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at several kinds of locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be approached from at least one other point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to approximately where B is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether the path runs through a doorway or hallway, or through a room. This helps determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not a neighboring point, the software creates a route to that point. This route is a list of points the robot needs to visit to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
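The route computation can be sketched as follows. The graph fragment uses point numbers and path lengths from the tables in the Path planning section; the function itself is a generic textbook Dijkstra, not the project code:<br />

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest route through the point map.

    graph maps a point number to a list of (neighbor, distance) pairs."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    # Reconstruct the route as the list of points to drive to.
    route = [goal]
    while route[-1] != start:
        route.append(prev[route[-1]])
    return list(reversed(route))

# Fragment of the point map, using the path lengths between points 3-6:
graph = {}
for a, b, w in [(4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06)]:
    graph.setdefault(a, []).append((b, w))
    graph.setdefault(b, []).append((a, w))

# From start point 4 to cabinet 3, the shortest route runs via point 5:
# dijkstra(graph, 4, 3) == [4, 5, 3]  (total length 1.66 instead of 2.55)
```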
<br />
== Reflection ==<br />
TBD<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given, together with the source each specification comes from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 equally spaced measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
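This conditioning step can be sketched as follows; the constant names and the list-based scan format are illustrative assumptions, not the actual framework API:<br />

```python
RANGE_MIN = 0.1   # m, minimum valid LRF reading from the specifications
RANGE_MAX = 10.0  # m, maximum valid LRF reading

def condition_scan(ranges):
    """Drop invalid LRF readings (e.g. spurious points measured at the
    sensor origin, probably caused by dust) while keeping each reading's
    angular index so the beam angle can still be recovered."""
    return [(i, r) for i, r in enumerate(ranges) if RANGE_MIN < r < RANGE_MAX]

# condition_scan([0.0, 2.5, 11.0, 0.8]) keeps only the readings at
# indices 1 and 3; the zero-range and out-of-range points are removed.
```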
<br />
=== Detection ===<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, and the orientation is corrected afterwards if PICO is not aligned correctly.<br />
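The orientation correction can be sketched as follows; the function and variable names are our own, not taken from the actual code. Wrapping the difference to [-pi, pi] ensures PICO always rotates the short way around:<br />

```python
import math

def heading_error(direction: float, orientation: float) -> float:
    """Difference between a path point's stored direction and PICO's
    current orientation, wrapped to [-pi, pi]."""
    err = direction - orientation
    # atan2 of (sin, cos) wraps any angle into [-pi, pi].
    return math.atan2(math.sin(err), math.cos(err))

# Example: a required direction of pi/2 with a current orientation of
# -pi/2 yields a half-turn correction rather than a 3/2-turn one.
```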
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the largest of these distances with the distance threshold<br />
##* If this value falls below the threshold, the global segment contains no further sub-segments.<br />
##* If this value exceeds the threshold, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
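Steps 3.1 to 3.4 can be sketched as a recursive function; the point format and the threshold value in the example are illustrative:<br />

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def split(points, threshold):
    """Recursive split step: return the corner points of a segment."""
    if len(points) < 3:
        return [points[0], points[-1]]
    a, b = points[0], points[-1]
    dists = [point_line_distance(p, a, b) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] < threshold:
        return [a, b]  # no further sub-segments in this segment
    # Split at the farthest point and process both halves.
    left = split(points[:i + 1], threshold)
    right = split(points[i:], threshold)
    return left[:-1] + right  # merge without duplicating the split point

# Example: an L-shaped scan is reduced to its three corner points:
# split([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)], 0.5)
# == [(0, 0), (2, 0), (2, 2)]
```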
<br />
Below is a visual representation of the split principle. The original image is taken from the 2017 EMC course, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with Mike's description of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then intersecting these lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto these lines.<br />
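The two geometric operations described here, intersecting two fitted wall lines to obtain a corner and projecting an end point perpendicularly onto a line, can be sketched as follows. Lines are represented as (a, b, c) with ax + by + c = 0; this representation is our assumption, not necessarily that of the actual code:<br />

```python
def intersect(l1, l2):
    """Corner point where two wall lines (a, b, c), with a*x + b*y + c = 0,
    cross; returns None for (nearly) parallel lines."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)

def project(p, line):
    """Perpendicular projection of point p onto the line (a, b, c):
    snaps a RANSAC end point onto the fitted wall line."""
    a, b, c = line
    x0, y0 = p
    t = (a * x0 + b * y0 + c) / (a * a + b * b)
    return (x0 - a * t, y0 - b * t)

# Example: the vertical wall x = 2 and the horizontal wall y = 3 meet in
# the corner (2, 3); the stray end point (5, 1) snaps onto x = 2 at (2, 1).
```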
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the state has fulfilled its exit TODO<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
Kevin's section about spatial recognition will be added here.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
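Such a velocity ramp can be sketched with a cosine-shaped profile; this is one possible smooth curve, not necessarily the exact S-curve equations linked under Useful Information:<br />

```python
import math

def s_curve_velocity(t: float, v0: float, v1: float, t_ramp: float) -> float:
    """Smooth velocity setpoint for a change from v0 to v1 over t_ramp
    seconds, using a cosine-shaped S-curve (zero slope at both ends)."""
    if t <= 0.0:
        return v0
    if t >= t_ramp:
        return v1
    s = 0.5 * (1.0 - math.cos(math.pi * t / t_ramp))  # ramps from 0 to 1
    return v0 + (v1 - v0) * s

# A 'Drive'-style function would send s_curve_velocity(t, current, target,
# t_ramp) to the base every tick; a 'Drive distance'-style function would
# additionally integrate the setpoint until the requested distance is covered.
```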
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows a robot that is far enough away from any walls or obstacles; the potential field vector is therefore zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around it as depicted by the dotted line.<br />
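The behaviour in these four situations can be reproduced with a simple sum of one attractive and several repulsive vectors; the gains and avoidance radius below are illustrative, not the tuned values of the actual software:<br />

```python
import math

def potential_field_vector(goal, scan_points, d_avoid=0.6, k_att=1.0, k_rep=0.5):
    """Attractive pull towards the goal plus repulsive pushes away from
    scan points inside the avoidance radius (robot frame, robot at origin).
    The gains and radius here are illustrative, not tuned values."""
    gx, gy = goal
    norm = math.hypot(gx, gy) or 1.0
    vx, vy = k_att * gx / norm, k_att * gy / norm  # unit attraction vector
    for px, py in scan_points:
        d = math.hypot(px, py)
        if 1e-6 < d < d_avoid:
            # Repulsion grows as the obstacle gets closer than d_avoid.
            w = k_rep * (1.0 / d - 1.0 / d_avoid) / d
            vx -= w * px
            vy -= w * py
    return (vx, vy)
```

With obstacles placed symmetrically on both sides, the repulsive terms cancel and the vector reduces to the pure attraction towards the goal, matching the narrow-corridor case above.<br />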
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and its field of view spans +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled in 1000 discrete measurements, at a rate that can be configured by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the PICO robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico Using PICO] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved to a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall, which marks the maximum range. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time it takes to reach the maximum velocity in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625 https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77051 Embedded Motion Control 2019 Group 3, 2019-06-12T15:18:48Z <p>S136625: /* State chart */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
*Job*<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== Approach ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep from the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these configuration changes. We felt, however, that the modifications were worth the slowdown and demonstrated the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at various locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In placing these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This helps determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not a neighbouring point, the software creates a route to that point. This route is a list of points that the robot needs to visit in order to reach the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
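The route search can be sketched with a minimal version of Dijkstra's algorithm over the point graph. The graph below is a small illustrative example, not the actual hospital map, and the function and point names are chosen here for illustration:<br />

```python
import heapq

def dijkstra(graph, start, goal):
    """Return the shortest route (list of points) from start to goal.

    graph maps each point to a dict of {neighbour: distance}.
    """
    queue = [(0.0, start, [start])]  # (cost so far, point, route taken)
    visited = set()
    while queue:
        cost, point, route = heapq.heappop(queue)
        if point == goal:
            return route
        if point in visited:
            continue
        visited.add(point)
        for neighbour, dist in graph[point].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + dist, neighbour, route + [neighbour]))
    return None  # goal not reachable

# Illustrative point graph (distances in metres)
graph = {
    'A': {'B': 1.0, 'C': 2.5},
    'B': {'A': 1.0, 'C': 1.0},
    'C': {'A': 2.5, 'B': 1.0},
}
print(dijkstra(graph, 'A', 'C'))  # ['A', 'B', 'C']: via B (2.0 m) beats the direct edge (2.5 m)
```

In the actual software the graph would be built from the path points and the path lengths listed in the Path planning section.<br />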
<br />
== Reflection ==<br />
TBD<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful information.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate it about the ''z'' axis. The maximum speed of the robot is limited to ''±0.5 m/s'' in translation and ''±1.2 rad/s'' in rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
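Assuming these simulator values (a 229.2° field of view split into 1000 beams), the bearing of each beam index follows directly; a small sketch, where the constant and function names are chosen here and are not the framework's API:<br />

```python
import math

FOV_DEG = 229.2    # field of view according to the simulator
NUM_BEAMS = 1000   # number of range measurements per scan
ANGLE_MIN = math.radians(-FOV_DEG / 2)             # first beam, to the right of the robot
ANGLE_INC = math.radians(FOV_DEG) / (NUM_BEAMS - 1)  # spacing between adjacent beams

def beam_angle(i):
    """Bearing of beam i in radians, with 0 pointing straight ahead."""
    return ANGLE_MIN + i * ANGLE_INC

print(round(math.degrees(beam_angle(0)), 1))    # -114.6
print(round(math.degrees(beam_angle(999)), 1))  # 114.6
```
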
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the GitLab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case the PICO robot turns out not to have loudspeakers, it needs to be determined during testing whether the WiFi interface can be utilised to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Detection ===<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet, and specifies the orientation PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, and the resulting error is corrected afterwards if PICO is not aligned properly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If the value falls below the limit value, there are no more sub-segments in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
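The split steps above can be sketched as follows; the function names, the example points and the threshold value are illustrative choices, not the project code:<br />

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.hypot(y2 - y1, x2 - x1)

def split(points, threshold):
    """Recursively split a segment at the point furthest from its end-point line."""
    if len(points) < 3:
        return [points]
    a, b = points[0], points[-1]
    distances = [point_line_distance(p, a, b) for p in points[1:-1]]
    i = max(range(len(distances)), key=distances.__getitem__) + 1  # index in points
    if distances[i - 1] <= threshold:
        return [points]  # no further sub-segments in this segment
    # split at the furthest point; that point belongs to both halves
    return split(points[:i + 1], threshold) + split(points[i:], threshold)

# An L-shaped scan: two perpendicular walls meeting at (2, 0)
pts = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(split(pts, 0.05))  # two segments, split at the corner (2, 0)
```
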
<br />
Below is a visual representation of the split principle. The image comes from the EMC course of 2017, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of the RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted such that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating these lines to each other. The final end points are determined by projecting the found vertices perpendicularly onto the fitted lines.<br />
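Equating two fitted lines to each other to recover a corner amounts to a standard line-intersection computation; a minimal sketch (the function name and example points are illustrative):<br />

```python
def line_intersection(a1, a2, b1, b2):
    """Intersection of the infinite lines through (a1, a2) and (b1, b2)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = a1, a2, b1, b2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        return None  # parallel lines: no corner
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Two fitted wall lines that should meet in a corner at (2, 0)
print(line_intersection((0, 0), (1, 0), (2, 1), (2, 2)))  # (2.0, 0.0)
```
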
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions that need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which always run in a separate thread. The state chart is designed such that all the requirements of the final challenge are fulfilled.<br />
<br />
=== World model block ===<br />
'''To be extended with a description of the spatial feature recognition.'''<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
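A minimal sketch of such a smooth velocity profile, here using a sinusoidal blend rather than the polynomial S-curve from the linked notes (the function name and the 1 s ramp time are assumptions for illustration):<br />

```python
import math

def s_curve_velocity(t, v0, v1, T):
    """Smooth S-curve from velocity v0 to v1 over duration T (seconds)."""
    if t <= 0:
        return v0
    if t >= T:
        return v1
    # Smoothstep-style blend: zero slope at both ends, so acceleration
    # starts and ends at zero and the motion is fluent
    s = 0.5 - 0.5 * math.cos(math.pi * t / T)
    return v0 + (v1 - v0) * s

# Accelerate from standstill to the 0.5 m/s maximum over 1 second
setpoints = [round(s_curve_velocity(0.1 * k, 0.0, 0.5, 1.0), 3) for k in range(11)]
print(setpoints)  # starts at 0.0, ends at 0.5, steepest slope in the middle
```

'Drive distance' can be obtained by integrating such a profile until the travelled distance matches the requested one.<br />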
<br />
Drive has been further incorporated into a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles; the potential field vector is therefore zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle, as depicted by the dotted line.<br />
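The repulsive part of the potential field can be sketched as a sum of vectors over all rangefinder points within an avoidance radius; the radius, gain and angle convention below are illustrative assumptions, not the tuned values used on the robot:<br />

```python
import math

def potential_field_vector(scan, angles, d_avoid=0.6, gain=1.0):
    """Sum of repulsive vectors from all LRF points inside the avoidance radius.

    scan:   measured distances (m); angles: bearing of each beam (rad),
    with 0 pointing straight ahead and positive angles to the left.
    Returns (fx, fy) in the robot frame: the correction to add to the
    attraction vector towards the current goal point.
    """
    fx = fy = 0.0
    for r, a in zip(scan, angles):
        if 0.0 < r < d_avoid:
            # repulsion grows as the obstacle gets closer, pointing away from it
            w = gain * (d_avoid - r) / d_avoid
            fx -= w * math.cos(a)
            fy -= w * math.sin(a)
    return fx, fy

# Obstacle close by on the left: the sum vector points to the right
fx, fy = potential_field_vector([2.0, 2.0, 0.3, 2.0], [-0.5, 0.0, 0.5, 1.0])
print(fy < 0)  # True: sideways repulsion away from the left obstacle
```

Recomputing this sum on every new scan gives the real-time behaviour described above, including avoidance of dynamic obstacles.<br />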
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the angle runs from -114.6 to +114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurements, which are taken at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the time over which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div><br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. Such obstructions are possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot about the ''z'' axis. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. This field of view is divided into 1000 equally spaced measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
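Given these specifications, a beam index can be converted to an angle and to a Cartesian point in the robot frame. The sketch below assumes the 1000 beams are equally spaced from -114.6° to +114.6°; the constant and function names are illustrative and not taken from the actual software framework.<br />

```python
import math

# Assumed sensor constants (simulator values from the Specifications section).
ANGLE_MIN = math.radians(-114.6)   # first beam, measured from the robot's front
ANGLE_MAX = math.radians(114.6)    # last beam
NUM_BEAMS = 1000

# Angle between two neighbouring beams.
ANGLE_INCREMENT = (ANGLE_MAX - ANGLE_MIN) / (NUM_BEAMS - 1)

def beam_angle(index: int) -> float:
    """Return the angle (rad) of beam `index`, measured from the robot's front."""
    if not 0 <= index < NUM_BEAMS:
        raise IndexError("beam index out of range")
    return ANGLE_MIN + index * ANGLE_INCREMENT

def beam_to_cartesian(index: int, distance: float):
    """Convert one range measurement to (x, y) in the robot frame (x forward)."""
    a = beam_angle(index)
    return (distance * math.cos(a), distance * math.sin(a))
```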
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
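As a minimal sketch of this conditioning step, assuming the framework delivers the scan as a flat list of distances and using the range limits from the Specifications section, invalid points can be filtered out as follows; the function name is illustrative:<br />

```python
# Valid measurement range of the laser rangefinder (simulator values).
RANGE_MIN = 0.1   # m
RANGE_MAX = 10.0  # m

def condition_scan(ranges):
    """Return (index, distance) pairs for valid measurements only.

    Points reported at (or near) the sensor origin fall below RANGE_MIN and
    are dropped, so they cannot pollute the feature-detection step. Keeping
    the beam index preserves each point's angle, which that step needs.
    """
    return [(i, r) for i, r in enumerate(ranges) if RANGE_MIN <= r <= RANGE_MAX]
```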
<br />
=== Detection ===<br />
<br />
==== Path planning ====<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at start-up. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of that cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to stand in front of the cabinet. The direction is compared with PICO's actual orientation, and the orientation is corrected afterwards if PICO is not aligned correctly.<br />
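A minimal sketch of the two computations described above, under the assumption that the front area is given as a list of corner vertices (the exact JSON layout is not reproduced here); the function names are illustrative:<br />

```python
import math

def front_area_midpoint(corners):
    """Path point in the middle of a cabinet's front area.

    `corners` is a list of (x, y) vertices of the area specified in front
    of the cabinet (an assumption about how the map file describes it).
    """
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def orientation_error(direction, orientation):
    """Difference between the required direction and PICO's orientation,
    wrapped to (-pi, pi] so the correction always turns the short way."""
    e = direction - orientation
    while e <= -math.pi:
        e += 2 * math.pi
    while e > math.pi:
        e -= 2 * math.pi
    return e
```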
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
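The path-length tables above define a weighted graph, on which Dijkstra's algorithm computes the shortest route between any two path points. The sketch below is a minimal illustration using Python's heapq, not the actual implementation:<br />

```python
import heapq

# Edges taken from the "Path lengths" tables above (undirected, lengths in m).
EDGES = [
    (4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06),
    (6, 7, 1.7), (7, 8, 2.0), (8, 9, 1.5), (9, 2, 1.6), (9, 10, 1.84),
    (9, 11, 1.17), (2, 10, 0.9), (10, 11, 0.85), (11, 12, 1.2),
    (12, 13, 1.17), (12, 14, 0.8), (13, 0, 0.5), (13, 14, 0.85),
    (14, 15, 1.2), (15, 1, 1.1), (15, 16, 0.7), (15, 17, 0.76),
    (1, 16, 0.85), (16, 17, 1.1), (17, 18, 1.5), (18, 19, 2.0), (19, 8, 2.0),
]

def shortest_route(start, goal):
    """Dijkstra's algorithm: return (length, [points]) of the shortest route."""
    graph = {}
    for a, b, w in EDGES:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (dist + w, nxt, route + [nxt]))
    return float("inf"), []
```

For example, the shortest route from the start point (4) to cabinet 0 (point 0) runs via points 6, 7, 8, 9, 11, 12 and 13.<br />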
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment using laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (written as ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the largest of these distances with a threshold value<br />
##* If this distance falls below the threshold, there are no further sub-segments within the global segment.<br />
##* If it exceeds the threshold, the segment is split at that point and steps 3.1 to 3.4 are performed again for the parts left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
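The split step (3.1 to 3.4) can be sketched as the following recursive function, using the perpendicular-distance formula from step 3.3; this is a minimal illustration, and the threshold is an assumed tuning parameter:<br />

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    # Line through a and b in the form ax + by + c = 0.
    la = b[1] - a[1]
    lb = a[0] - b[0]
    lc = -(la * a[0] + lb * a[1])
    return abs(la * p[0] + lb * p[1] + lc) / math.hypot(la, lb)

def split(points, first, last, threshold, breaks):
    """Recursive split step: record indices where the segment must be split."""
    if last - first < 2:
        return  # no interior points left
    # Point with the largest perpendicular distance to the line
    # between the two end points of this (sub)segment.
    dists = [point_line_distance(points[i], points[first], points[last])
             for i in range(first + 1, last)]
    i_max = max(range(len(dists)), key=dists.__getitem__) + first + 1
    if dists[i_max - first - 1] > threshold:
        split(points, first, i_max, threshold, breaks)
        breaks.append(i_max)
        split(points, i_max, last, threshold, breaks)

def split_segment(points, threshold=0.1):
    """Return the break-point indices of a segment, end points included."""
    breaks = []
    split(points, 0, len(points) - 1, threshold, breaks)
    return [0] + sorted(breaks) + [len(points) - 1]
```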
<br />
Below is a visual representation of the split principle. The original image comes from the wiki page of EMC 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of the RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only returns start and end points that lie somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then intersecting these lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto these lines.<br />
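A minimal sketch of this correction, assuming each fitted wall is represented by two points on it; the function names are illustrative:<br />

```python
def line_coefficients(p, q):
    """Line through p and q as (a, b, c) with ax + by + c = 0."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    c = -(a * p[0] + b * p[1])
    return a, b, c

def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c); None if parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel lines: no corner to recover
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)

def project(p, l):
    """Perpendicular projection of point p onto line l = (a, b, c)."""
    a, b, c = l
    t = (a * p[0] + b * p[1] + c) / (a * a + b * b)
    return (p[0] - a * t, p[1] - b * t)
```

Corners are recovered with `intersect` on two consecutive wall lines; loose end points are snapped to their wall with `project`.<br />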
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions. (TODO)<br />
<br />
The figure below shows the state machine for this challenge. The state chart is a part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
''To be added: Kevin's section on spatial recognition.''<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
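As a minimal illustration of such an S-curve (not necessarily the formulation used in the actual 'Drive' function), the velocity can follow a cubic smoothstep between the current and target speed, so that the acceleration is zero at both ends of the transition; the ramp time is an assumed tuning parameter:<br />

```python
def s_curve_velocity(v_from, v_to, t, ramp_time):
    """Velocity at time t (s) of an S-curve ramp from v_from to v_to."""
    if t <= 0.0:
        return v_from
    if t >= ramp_time:
        return v_to
    u = t / ramp_time
    # Cubic smoothstep: s(0)=0, s(1)=1, s'(0)=s'(1)=0, so the
    # acceleration vanishes at both ends of the transition.
    s = 3.0 * u * u - 2.0 * u * u * u
    return v_from + (v_to - v_from) * s
```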
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
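A minimal sketch of the repulsive part of such a potential field, reproducing the behaviour described above (contributions from a symmetric corridor cancel; a closer left wall yields a net vector to the right). The avoidance radius and gain are assumed tuning values, and the obstacle points would come from the conditioned LRF data in practice:<br />

```python
import math

AVOID_RADIUS = 0.6  # m, assumed tuning value
GAIN = 1.0          # assumed tuning value

def repulsion_vector(obstacles):
    """Sum of repulsive contributions from (x, y) obstacle points in the
    robot frame (x forward, y left); (0, 0) when nothing is in range."""
    fx = fy = 0.0
    for (x, y) in obstacles:
        d = math.hypot(x, y)
        if 0.0 < d < AVOID_RADIUS:
            # Weight grows without bound as d -> 0 and fades to zero
            # at the edge of the avoidance region.
            w = GAIN * (1.0 / d - 1.0 / AVOID_RADIUS) / d
            fx -= w * x   # push away from the obstacle
            fy -= w * y
    return (fx, fy)
```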
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is measured in 1000 parts, at a sample time that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]<br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest amount of time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77022Embedded Motion Control 2019 Group 32019-06-12T13:20:44Z<p>S136625: /* Hospital Competition */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution to not bump into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did succeed the challenge, we were the slowest performing group as a result of the named modifications to the configuration. We felt however that these modifications were worth the slowdown and proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed on different locations on the map. These locations are: at cabinets, on junction, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from a different point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction to from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can more accurately determine its location compared to B and drive to B. For the path between points, it can be defined whether this path is through a doorway or hallway, or whether its though a room. This can help in how the robot trajectory should be controlled during the driving from point to point.<br />
<br />
If the robots needs to drive from a startpoint to an endpoint which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive to get to the endpoint. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route. The algorithm that is used is called "Dijkstra'a algorithm". A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== Reflection ==<br />
TBD<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition show a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. This field of view is divided into 1000 measurement points. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
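As a rough sketch of this conditioning step (the function name and the use of the simulator range limits are our own assumptions, not part of the EMC framework), the filtering could look as follows:<br />

```cpp
#include <vector>

// Hypothetical limits, taken from the simulator specifications above.
const double kMinRange = 0.1;   // m
const double kMaxRange = 10.0;  // m

// Keep only LRF measurements inside the sensor's valid interval.
// Points reported at (or near) the origin of the sensor, e.g. caused
// by dust, have a range below kMinRange and are dropped here.
std::vector<double> filterRanges(const std::vector<double>& raw) {
    std::vector<double> valid;
    valid.reserve(raw.size());
    for (double r : raw) {
        if (r >= kMinRange && r <= kMaxRange) {
            valid.push_back(r);
        }
    }
    return valid;
}
```

In the real perception step the beam index (and thus the angle) would have to be kept alongside each range; this sketch only illustrates the range filtering itself.<br />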
<br />
=== Detection ===<br />
<br />
==== Path planning ====<br />
The path points are determined both automatically and by hand. When the program starts, it loads the JSON map file. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to stand in front of the cabinet. The direction is subtracted from the real orientation of PICO, and PICO is corrected afterwards if it is not aligned correctly.<br />
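The path point structure and the orientation correction described above could be sketched as follows (the names are hypothetical, and the wrap-around of the angle error is an assumption about how the correction is implemented):<br />

```cpp
const double kPi = 3.14159265358979323846;

// A path point as described above: position plus an approach direction
// (the direction is only meaningful for points in front of a cabinet).
struct PathPoint {
    double x;    // m
    double y;    // m
    double dir;  // rad, required orientation at the cabinet
};

// Hypothetical helper: orientation error between PICO and the required
// cabinet direction, normalised to (-pi, pi] so the correction always
// rotates the short way around.
double orientationError(double picoAngle, const PathPoint& cabinet) {
    double e = cabinet.dir - picoAngle;
    while (e > kPi)   e -= 2.0 * kPi;
    while (e <= -kPi) e += 2.0 * kPi;
    return e;
}
```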
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (''ax + by + c = 0'')<br />
## For each data point between these end points, determine the perpendicular distance to the line (''d = |ax + by + c| / √(a² + b²)'')<br />
## Compare the point with the largest distance to the distance limit value<br />
##* If this distance falls below the limit value, the global segment contains no further sub-segments.<br />
##* If the distance falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
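The split steps 3.1 to 3.4 above can be sketched as a recursive function (a minimal illustration with our own names and types, not the actual project code):<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b, using the
// ax + by + c = 0 form from step 3.3.
double pointLineDist(const Pt& p, const Pt& a, const Pt& b) {
    double A = b.y - a.y;
    double B = a.x - b.x;
    double C = -(A * a.x + B * a.y);
    return std::fabs(A * p.x + B * p.y + C) / std::sqrt(A * A + B * B);
}

// Recursive split (steps 3.1-3.4): collect the indices at which the
// segment [first, last] must be split. 'thresh' is the distance limit.
void splitSegment(const std::vector<Pt>& pts, std::size_t first,
                  std::size_t last, double thresh,
                  std::vector<std::size_t>& breaks) {
    double dMax = 0.0;
    std::size_t iMax = first;
    for (std::size_t i = first + 1; i < last; ++i) {
        double d = pointLineDist(pts[i], pts[first], pts[last]);
        if (d > dMax) { dMax = d; iMax = i; }
    }
    if (dMax > thresh) {               // a corner: split and recurse
        splitSegment(pts, first, iMax, thresh, breaks);
        breaks.push_back(iMax);
        splitSegment(pts, iMax, last, thresh, breaks);
    }
}
```

For an L-shaped scan, the point with the largest perpendicular distance is the corner, so exactly one break index is returned.<br />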
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted such that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating adjacent lines to each other, which yields the corner points. The final end points are determined by projecting the found vertices perpendicularly onto these fitted lines.<br />
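As an illustration of equating two fitted wall lines to obtain a corner (a sketch assuming non-vertical lines in slope form; the actual implementation would need the general ''ax + by + c = 0'' form to handle vertical walls):<br />

```cpp
// Corner correction sketch: each fitted wall line in slope form
// y = a*x + c. Equating two adjacent wall lines gives the corner
// point where the walls actually meet.
struct Line { double a, c; };

// Intersection of two non-parallel lines (requires l1.a != l2.a):
// solve a1*x + c1 = a2*x + c2 for x, then substitute back for y.
void lineIntersection(const Line& l1, const Line& l2,
                      double& x, double& y) {
    x = (l2.c - l1.c) / (l1.a - l2.a);
    y = l1.a * x + l1.c;
}
```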
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the current state has fulfilled its exit conditions; if so, the state machine transitions to the next state.<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
''To be added: Kevin's section on spatial recognition.''<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for every velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
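One way to realise such an S-curve velocity change is a smoothstep blend between the old and the new set point; this is only a sketch under our own assumptions, and the actual 'Drive' implementation may differ:<br />

```cpp
// S-curve sketch: blend the commanded velocity from v0 to v1 over a
// duration T with a smoothstep, so the acceleration starts and ends
// at zero and the velocity change is fluent rather than a step.
double sCurveVelocity(double v0, double v1, double t, double T) {
    double s = t / T;                       // normalised time
    if (s < 0.0) s = 0.0;
    if (s > 1.0) s = 1.0;
    double blend = s * s * (3.0 - 2.0 * s); // smoothstep, 0 -> 1
    return v0 + (v1 - v0) * blend;
}
```

Calling this every control tick with the elapsed time since the set point change yields a smooth ramp up to, for example, the 0.5 m/s translation limit.<br />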
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
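The behaviour in these four situations follows from summing one repulsive component vector per LRF point inside the avoidance region. A minimal sketch (parameter names and the linear distance weighting are our own assumptions, not the project code):<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

// Repulsive potential field sketch: every LRF point closer than
// 'avoidRadius' contributes a vector pointing away from the obstacle,
// weighted so that nearer points push harder. In a symmetric corridor
// the left and right contributions cancel, giving a zero sum vector.
Vec2 repulsionVector(const std::vector<double>& ranges,
                     double angleMin, double angleInc,
                     double avoidRadius) {
    Vec2 sum{0.0, 0.0};
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double r = ranges[i];
        if (r <= 0.0 || r >= avoidRadius) continue;  // outside region
        double ang = angleMin + angleInc * static_cast<double>(i);
        double w = (avoidRadius - r) / avoidRadius;  // 0..1, near = big
        sum.x -= w * std::cos(ang);  // push away from the obstacle
        sum.y -= w * std::sin(ang);
    }
    return sum;
}
```

The resulting vector is added to the attraction vector towards the goal before it is passed to the Drive function.<br />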
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle runs from +114.6 to −114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved to a file and compared to actually measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time it takes to reach the maximum velocity in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77021Embedded Motion Control 2019 Group 32019-06-12T13:20:22Z<p>S136625: /* Perception block */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution to not bump into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge successfully, we were the slowest performing group as a result of the aforementioned modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map based on the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be reached from a neighbouring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help in deciding how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software creates a route to that point. This route is a list of points the robot needs to visit to reach the end point. To make sure the route is as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
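Dijkstra's algorithm on this point map can be sketched as follows, with nodes as path points and edge weights as the straight-line path lengths from the tables above (the adjacency-list representation is our own illustration):<br />

```cpp
#include <cstddef>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra sketch over the point map: node i's neighbours are stored
// as (neighbour index, straight-line length) pairs. Returns the
// shortest distance from 'start' to every path point.
std::vector<double> dijkstra(
        const std::vector<std::vector<std::pair<int, double> > >& adj,
        int start) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), inf);
    typedef std::pair<double, int> QItem;  // (distance so far, node)
    std::priority_queue<QItem, std::vector<QItem>,
                        std::greater<QItem> > q;  // min-heap
    dist[start] = 0.0;
    q.push(QItem(0.0, start));
    while (!q.empty()) {
        QItem top = q.top();
        q.pop();
        if (top.first > dist[top.second]) continue;  // stale entry
        for (std::size_t i = 0; i < adj[top.second].size(); ++i) {
            int v = adj[top.second][i].first;
            double w = adj[top.second][i].second;
            if (top.first + w < dist[v]) {   // shorter route found
                dist[v] = top.first + w;
                q.push(QItem(dist[v], v));
            }
        }
    }
    return dist;
}
```

Storing a backpointer (the predecessor of each node) alongside `dist` would additionally recover the list of points that forms the route itself.<br />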
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
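<br />
Using the points and path lengths tabulated above, the shortest route between any two points can be computed with a standard shortest-path search. Below is a sketch using Dijkstra's algorithm over the tabulated edges (point ids as in the tables; this is an illustration, not the project code):<br />

```python
import heapq

# Undirected path-point graph taken from the tables above: (point, point, length).
EDGES = [
    (4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06),
    (6, 7, 1.7), (7, 8, 2.0), (8, 9, 1.5), (9, 2, 1.6), (9, 10, 1.84),
    (9, 11, 1.17), (2, 10, 0.9), (10, 11, 0.85), (11, 12, 1.2),
    (12, 13, 1.17), (12, 14, 0.8), (13, 0, 0.5), (13, 14, 0.85),
    (14, 15, 1.2), (15, 1, 1.1), (15, 16, 0.7), (15, 17, 0.76),
    (1, 16, 0.85), (16, 17, 1.1), (17, 18, 1.5), (18, 19, 2.0), (19, 8, 2.0),
]

def shortest_route(start, goal):
    """Dijkstra's algorithm: return (total length, list of point ids)."""
    graph = {}
    for a, b, w in EDGES:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    # Expand the cheapest frontier node until the goal is settled.
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, route + [nxt]))
    return float("inf"), []
```

For example, the shortest route from the start point (4) to cabinet 0 runs via points 6, 7, 8, 9, 11, 12 and 13, with a total length of 10.73.<br />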
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used approach is the split and merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the largest of these distances against the distance threshold<br />
##* If it falls below the threshold, the global segment contains no further sub-segments.<br />
##* If it falls above the threshold, the segment is split at this point and steps 3.1 to 3.4 are repeated for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
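<br />
The split step (3.1 to 3.4) can be sketched as a recursive function (a sketch, not the project code; the threshold value is a placeholder):<br />

```python
import math

def point_line_distance(p, a, b):
    # Line through a and b written as A*x + B*y + C = 0.
    A = b[1] - a[1]
    B = a[0] - b[0]
    C = b[0] * a[1] - a[0] * b[1]
    return abs(A * p[0] + B * p[1] + C) / math.hypot(A, B)

def split(points, threshold=0.05):
    """Recursively split an ordered run of scan points into straight
    segments; returns the kept break points (segment end points)."""
    a, b = points[0], points[-1]
    idx, dmax = None, 0.0
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], a, b)
        if d > dmax:
            idx, dmax = i, d
    if idx is None or dmax <= threshold:
        return [a, b]                        # already a single straight line
    left = split(points[:idx + 1], threshold)
    right = split(points[idx:], threshold)
    return left[:-1] + right                 # drop the duplicated break point
```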
<br />
Below is a visual representation of the split principle. The original image is taken from group 10 of the 2017 EMC course [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and endpoints align with the real wall lines. This is done by determining the lines between the points and intersecting these lines with each other. The final endpoints are determined by projecting the found vertices perpendicularly onto these lines.<br />
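<br />
This correction can be sketched with two small geometric helpers (illustrative, not the actual implementation): intersecting two fitted lines yields a corner, and a perpendicular projection snaps an endpoint onto its wall line.<br />

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1,p2 with the line through p3,p4."""
    d1x, d1y = p2[0] - p1[0], p2[1] - p1[1]
    d2x, d2y = p4[0] - p3[0], p4[1] - p3[1]
    den = d1x * d2y - d1y * d2x
    if den == 0:
        return None                          # parallel lines: no corner
    t = ((p3[0] - p1[0]) * d2y - (p3[1] - p1[1]) * d2x) / den
    return p1[0] + t * d1x, p1[1] + t * d1y

def project_onto_line(p, a, b):
    """Perpendicular projection of point p onto the line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    return a[0] + t * dx, a[1] + t * dy
```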
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the state has fulfilled its exit conditions.<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
To be added: Kevin's description of spatial recognition.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
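<br />
A minimal sketch of such an S-curve velocity ramp, here using a smoothstep profile rather than the exact equations from the linked notes (an illustration, not the actual 'Drive' implementation):<br />

```python
def s_curve_velocity(v0, v1, t, T):
    """Velocity at time t while ramping from v0 to v1 over duration T,
    following a smoothstep (3s^2 - 2s^3) profile: the acceleration is
    zero at both ends, so the motion starts and stops fluently."""
    s = min(max(t / T, 0.0), 1.0)            # normalised time, clamped to [0, 1]
    return v0 + (v1 - v0) * (3 * s * s - 2 * s * s * s)
```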
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
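<br />
The behaviour in these four situations can be sketched with a simple repulsive-sum computation (a sketch; the cutoff radius and gain are made-up values, and obstacle points are assumed to be given in the robot frame):<br />

```python
import math

def repulsion_vector(obstacles, cutoff=0.6, gain=1.0):
    """Sum the repulsive contributions of all obstacle points (robot frame).
    Points beyond the cutoff radius contribute nothing, so in open space or
    in a symmetric corridor the sum vector is (near) zero."""
    fx, fy = 0.0, 0.0
    for (x, y) in obstacles:
        d = math.hypot(x, y)
        if 0.0 < d < cutoff:
            m = gain * (1.0 / d - 1.0 / cutoff)   # stronger when closer
            fx -= m * x / d                        # push away from the point
            fy -= m * y / d
    return fx, fy
```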
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m and the field of view spans +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled at 1000 points, at a time interval that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
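<br />
The maximum-range test can be sketched as a small control loop (the `read_scan` and `send_velocity` callables are hypothetical stand-ins for the robot interface, which may differ; the beam index and speed are placeholder values):<br />

```python
def find_wall_dropout(read_scan, send_velocity, beam=500, speed=-0.1):
    """Drive backward slowly until the wall in front of the robot is no
    longer registered by the centre beam, then stop.
    read_scan() -> list of 1000 ranges [m]; send_velocity(vx, vy, va)."""
    while True:
        ranges = read_scan()
        if not (0.0 < ranges[beam] < 10.0):   # reading left the valid range
            send_velocity(0.0, 0.0, 0.0)      # stop: maximum range reached
            return
        send_velocity(speed, 0.0, 0.0)        # keep creeping backward
```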
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77020Embedded Motion Control 2019 Group 32019-06-12T13:19:32Z<p>S136625: /* Path planning */</p>
<hr />
<div>
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
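<br />
The transitions described above can be sketched as a minimal state machine (state and event names are illustrative, not taken from the actual code):<br />

```python
from enum import Enum, auto

class State(Enum):
    FIND_WALL = auto()        # drive forward until a wall appears in front
    FOLLOW_WALL = auto()      # keep the wall on the right within a distance band
    TURN_LEFT = auto()        # rotate 90 degrees counterclockwise at a wall
    FOLLOW_CORRIDOR = auto()  # follow the right wall of the exit corridor
    FINISHED = auto()         # finish line crossed

def next_state(state, front_blocked, gap_right):
    """One transition step of the wall follower sketched above."""
    if state is State.FIND_WALL and front_blocked:
        return State.FOLLOW_WALL
    if state is State.FOLLOW_WALL:
        if gap_right:
            return State.FOLLOW_CORRIDOR   # exit corridor found on the right
        if front_blocked:
            return State.TURN_LEFT         # next wall found: rotate 90 deg CCW
    if state is State.TURN_LEFT and not front_blocked:
        return State.FOLLOW_WALL
    if state is State.FOLLOW_CORRIDOR and gap_right:
        return State.FINISHED              # second gap: finish line crossed
    return state
```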
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that the modifications were worth the slowdown and demonstrated the robustness of our software's simple approach.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at various locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from a different point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can more accurately determine its location relative to B and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help determine how the robot's trajectory should be controlled while driving from point to point.<br />
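<br />
Driving to where B approximately is amounts to dead reckoning from A with the stored distance and direction (a sketch; names are illustrative):<br />

```python
import math

def estimated_target(ax, ay, dist, direction):
    """Dead-reckoned estimate of neighbour point B as seen from point A,
    given the stored distance and direction (angle in radians)."""
    return ax + dist * math.cos(direction), ay + dist * math.sin(direction)
```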
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive in order to reach the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, he must know where he is in the world map and what is around him. PICO is equipped with a LIDAR scanner that scans the environment with the help of laser beams. This data is then processed to be able to determine where all walls and objects are. There are many ways in which you can process the data into useful information. A commonly used algorithm is the split and merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we do the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the linear line between these end points (by = ax + c)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If our value falls below the limit value then there are no more segments (parts) in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended met een verhaal van Mike over zijn ransac functie!!!!'''<br />
<br />
A final line correction needs to be done because the RANSAC function gives only start and endpoints somewhere between the found vertices. The lines need to be fitted so that the corners and endpoints align to the real wall lines. This is simply done by determining the lines between the points and then equate the lines to each other. The final endpoints are determined by equating the point on the line where the found vertices are perpendicular to this line.<br />
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the state has fulfilled its exit TODO<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" form the system architecture. This diagram will be used as basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
Hier komt kevin's shit over spatial recognition enzo... biem<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
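The summation of the repulsive component vectors described above can be sketched as follows. The linear distance weighting and all parameter names are illustrative assumptions, not the exact implementation on the robot.

```python
import math

def potential_field_vector(scan, angle_min, angle_inc, d_avoid=0.5, gain=1.0):
    """Sum a repulsive vector over all laser points closer than d_avoid.

    scan: list of range readings [m]; beam i points at angle_min + i*angle_inc.
    Returns (fx, fy) in the robot frame: the repulsion pushes away from
    nearby obstacles and is zero when everything is farther than d_avoid.
    """
    fx = fy = 0.0
    for i, r in enumerate(scan):
        if r <= 0.0 or r >= d_avoid:            # ignore invalid or far points
            continue
        angle = angle_min + i * angle_inc
        weight = gain * (d_avoid - r) / d_avoid  # stronger when closer
        fx -= weight * math.cos(angle)           # push away from the obstacle
        fy -= weight * math.sin(angle)
    return fx, fy
```

In a symmetric corridor the left and right contributions cancel, and an obstacle entering the avoidance region of radius d_avoid produces a sum vector pointing away from it, matching the situations in the figure.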
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico Using Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actually measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time it takes to reach the maximum velocity of the robot in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77019Embedded Motion Control 2019 Group 32019-06-12T13:19:15Z<p>S136625: /* Perception block */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
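The transitions in the chart can be sketched as a small state machine. The state and input names below are illustrative, not the actual code.

```python
def next_state(state, front_blocked, right_gap):
    """One transition step of the escape-room wall follower.

    front_blocked: something is detected in front of the robot
    right_gap: a gap is detected in the wall to the right
    """
    if state == "find_wall":
        if front_blocked:
            return "follow"      # wall found: line up and follow it
    elif state == "follow":
        if right_gap:
            return "corridor"    # gap to the right: turn into the exit
        if front_blocked:
            return "turn_left"   # next wall found: rotate 90 deg CCW
    elif state == "turn_left":
        return "follow"          # after rotating, follow the new wall
    elif state == "corridor":
        if right_gap:
            return "finished"    # second gap: finish line crossed
    return state
```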
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution to not bump into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did succeed in the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt however that these modifications were worth the slowdown and proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at several types of locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from a neighboring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can more accurately determine its location relative to B and drive to B. For the path between points, it can be defined whether this path is through a doorway or hallway, or whether it is through a room. This can help determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route. The algorithm that is used is called "Dijkstra's algorithm". A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== Path planning ==<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified in the direction variable. The direction is subtracted from the actual orientation of PICO, and the resulting error is afterwards corrected if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
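A path point can be represented as a small structure with the two coordinates and the optional direction. The sketch below is illustrative: the coordinates are taken from the tables below, while the example heading for cabinet 0 is an assumed value, not taken from the map file.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PathPoint:
    x: float                           # position on the map [m]
    y: float
    direction: Optional[float] = None  # required heading [rad], cabinets only

# Cabinet 0 sits at (0.4, 3.2); the heading 0.0 is an illustrative value.
cabinet0 = PathPoint(0.4, 3.2, direction=0.0)
# Regular path points, such as the start point, carry no direction.
start = PathPoint(5.0, 2.5)
```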
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
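Dijkstra's algorithm, mentioned in the approach above, can be run directly on the path lengths from these tables. Below is a minimal sketch; the function name is illustrative.

```python
import heapq

# Undirected path graph from the tables above: (point A, point B, length).
EDGES = [
    (4, 5, 0.86), (4, 6, 1.49), (5, 3, 0.8), (5, 6, 0.7), (3, 6, 1.06),
    (6, 7, 1.7), (7, 8, 2.0), (8, 9, 1.5), (9, 2, 1.6), (9, 10, 1.84),
    (9, 11, 1.17), (2, 10, 0.9), (10, 11, 0.85), (11, 12, 1.2),
    (12, 13, 1.17), (12, 14, 0.8), (13, 0, 0.5), (13, 14, 0.85),
    (14, 15, 1.2), (15, 1, 1.1), (15, 16, 0.7), (15, 17, 0.76),
    (1, 16, 0.85), (16, 17, 1.1), (17, 18, 1.5), (18, 19, 2.0), (19, 8, 2.0),
]

def shortest_route(edges, start, goal):
    """Dijkstra's algorithm over the path-point graph.

    Returns (total length, list of points from start to goal).
    """
    graph = {}
    for a, b, d in edges:
        graph.setdefault(a, []).append((b, d))
        graph.setdefault(b, []).append((a, d))
    queue = [(0.0, start, [start])]  # (distance so far, point, route)
    visited = set()
    while queue:
        dist, point, route = heapq.heappop(queue)
        if point == goal:
            return dist, route       # first pop of the goal is the shortest
        if point in visited:
            continue
        visited.add(point)
        for nxt, d in graph[point]:
            if nxt not in visited:
                heapq.heappush(queue, (dist + d, nxt, route + [nxt]))
    return float("inf"), []
```

For example, `shortest_route(EDGES, 4, 2)` returns the route 4 → 6 → 7 → 8 → 9 → 2 from the start point to cabinet 2, with a total length of about 8.29.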
<br><br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component will be given, with a source of where each specification comes from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement points. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
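Under the assumption that the 1000 beams are spread evenly over the field of view, with the first and last beam at its edges, the angle of each beam follows directly from these numbers. A sketch:

```python
import math

FOV_DEG = 229.2   # field of view from the simulator
NUM_BEAMS = 1000  # number of range readings per scan

def beam_angle(i):
    """Angle of beam i [rad], measured from the front of the robot.

    Assumes beam 0 points at -114.6 deg and the last beam at +114.6 deg,
    with the remaining beams spread evenly in between.
    """
    angle_min = math.radians(-FOV_DEG / 2.0)
    increment = math.radians(FOV_DEG) / (NUM_BEAMS - 1)
    return angle_min + i * increment

def in_range(r):
    """True when a reading lies inside the valid measurement range."""
    return 0.1 <= r <= 10.0
```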
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
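The filtering can be sketched as follows, assuming the simulator's measurement range of 0.1 m to 10 m; how invalid points are actually represented in the framework may differ.

```python
def filter_scan(ranges, range_min=0.1, range_max=10.0):
    """Replace invalid range readings with None so later stages skip them.

    Readings at (or very near) the sensor origin, e.g. caused by dust on
    the sensor, fall below range_min and are discarded here, as are
    readings beyond the maximum range of the sensor.
    """
    return [r if range_min <= r <= range_max else None for r in ranges]
```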
<br />
=== Wall finding algorithm ===<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used algorithm is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we perform the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points, written in the form a*x + b*y + c = 0<br />
## For each data point between these end points, determine the perpendicular distance to this line: d = abs(a*x + b*y + c) / sqrt(a^2 + b^2)<br />
## Compare the largest of these distances with the threshold value<br />
##* If this distance falls below the threshold, there are no further sub-segments in the global segment.<br />
##* If it exceeds the threshold, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
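Steps 3.1 through 3.4 can be sketched as a recursive function. The names and the threshold value below are illustrative; the distance computation is the perpendicular-distance formula from step 3.3.

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def split(points, threshold=0.05):
    """Recursive split step: return the indices where the segment divides.

    Fits the line between the segment end points; if the farthest point
    lies more than `threshold` away, split there and recurse on both halves.
    """
    if len(points) < 3:
        return []
    a, b = points[0], points[-1]
    dists = [point_line_distance(p, a, b) for p in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i_max - 1] <= threshold:
        return []                # no corner: segment is a single wall
    left = split(points[:i_max + 1], threshold)
    right = [i_max + j for j in split(points[i_max:], threshold)]
    return left + [i_max] + right
```

Applied to an L-shaped set of points, the function returns the index of the corner point at which the segment is split.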
<br />
Below is a visual representation of the split principle. The original image is taken from the EMC 2017 course page of group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only returns start and end points that lie somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and intersecting adjacent lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto these lines.<br />
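The perpendicular projection used for this correction can be sketched as follows (names are illustrative):

```python
def project_onto_line(p, a, b):
    """Perpendicular projection of point p onto the line through a and b.

    Used to snap a found vertex onto the fitted wall line, so corners
    and end points align with the real walls.
    """
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    t = ((x0 - x1) * dx + (y0 - y1) * dy) / (dx * dx + dy * dy)
    return (x1 + t * dx, y1 + t * dy)
```

Intersecting two adjacent fitted lines then yields the corner point where two walls meet.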
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the current state has fulfilled its exit conditions and, if so, a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
</div>S136625
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown and proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital map. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether the path runs through a doorway or hallway, or through a room. This helps determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software creates a route to that point. This route is a list of points the robot needs to visit to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
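A minimal sketch of Dijkstra's algorithm over such a point map is shown below. The graph fragment uses a few of the distances from the path length tables further down; the function and variable names are illustrative and not taken from the actual software.<br />

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest route through the point map.

    `graph` maps a point to a dict {neighbour: distance}.
    Returns (route, total length), or (None, inf) if no route exists.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        length, point, route = heapq.heappop(queue)
        if point == goal:
            return route, length
        if point in visited:
            continue
        visited.add(point)
        for neighbour, dist in graph[point].items():
            if neighbour not in visited:
                heapq.heappush(queue, (length + dist, neighbour, route + [neighbour]))
    return None, float("inf")

# A fragment of the point map, with distances from the path length tables.
graph = {
    4: {5: 0.86, 6: 1.49},
    5: {4: 0.86, 3: 0.8, 6: 0.7},
    6: {4: 1.49, 5: 0.7, 3: 1.06},
    3: {5: 0.8, 6: 1.06},
}
```

For example, the shortest route from the start point (4) to cabinet 3 goes via point 5 rather than point 6, since 0.86 + 0.8 is shorter than 1.49 + 1.06.<br />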
<br />
== Path planning ==<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The remaining path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is compared with the actual orientation of PICO, which is then corrected if PICO is not aligned correctly.<br />
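As an illustration, a path point with the three variables named above could be represented as follows. The names and the wrapping of the heading error are our own choices, not the actual implementation.<br />

```python
import math
from dataclasses import dataclass

@dataclass
class PathPoint:
    x: float                  # map x coordinate [m]
    y: float                  # map y coordinate [m]
    direction: float = None   # required heading [rad]; only set for cabinet points

def heading_error(point, robot_angle):
    """Angle PICO still has to rotate to face the cabinet, wrapped to [-pi, pi)."""
    if point.direction is None:
        return 0.0
    error = point.direction - robot_angle
    # atan2 of (sin, cos) wraps the difference into a single turn.
    return math.atan2(math.sin(error), math.cos(error))
```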
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view of the sensor is divided into 1000 beams. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions, in which case a transition to the next state is made.<br />
<br />
The figure below shows the state machine for this challenge. The state chart will be part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions that always run in a separate thread, such as tracking the position of the robot on the map. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
Kevin's section about spatial recognition and related topics is to be added here.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed: 'Drive', for accelerating or decelerating to a certain speed in any direction, and 'Drive distance', for traveling a certain distance in any direction.<br />
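As an illustration of such a smooth velocity change, the sketch below generates an S-shaped velocity setpoint. It uses a simple cosine blend rather than the piecewise S-curve equations linked under Useful Information, so it is an approximation of the approach, not the actual implementation.<br />

```python
import math

def s_curve_velocity(v_start, v_end, t, t_total):
    """Velocity setpoint at time t for a smooth (S-curve) change from
    v_start to v_end over a ramp of duration t_total.

    A cosine blend is used, so the acceleration is zero at both ends
    of the ramp, avoiding jerky starts and stops.
    """
    if t <= 0.0:
        return v_start
    if t >= t_total:
        return v_end
    blend = 0.5 - 0.5 * math.cos(math.pi * t / t_total)
    return v_start + (v_end - v_start) * blend
```

Calling this every control tick with the elapsed ramp time yields a velocity that starts and ends flat, e.g. ramping from standstill to the 0.5 m/s maximum.<br />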
<br />
Drive has been further incorporated into a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of a potential field: the leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Every wall or object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far away enough from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
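The summation of attraction and repulsion vectors described above can be sketched as follows. The goal and the obstacle points (e.g. taken from the laser rangefinder) are expressed in the robot frame; the gains and the avoidance radius are illustrative values, not the ones used on the robot.<br />

```python
import math

def potential_field_vector(goal, obstacles, r_avoid=0.6, k_att=1.0, k_rep=0.05):
    """Sum of an attractive vector towards `goal` and repulsive vectors away
    from every obstacle point that lies within `r_avoid` of the robot.

    `goal` and each obstacle are (x, y) tuples in the robot frame.
    """
    gx, gy = goal
    norm = math.hypot(gx, gy) or 1.0
    fx, fy = k_att * gx / norm, k_att * gy / norm  # unit attraction towards goal
    for ox, oy in obstacles:
        d = math.hypot(ox, oy)
        if 0.0 < d < r_avoid:
            # Repulsion grows as the obstacle gets closer and points away from it;
            # it fades to zero at the edge of the avoidance region.
            scale = k_rep * (1.0 / d - 1.0 / r_avoid) / d
            fx -= scale * ox
            fy -= scale * oy
    return fx, fy
```

In the symmetric-corridor case from the figure, the repulsive contributions of the left and right walls cancel, leaving only the attraction towards the goal.<br />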
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the field of view spans +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 equally spaced beams, sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico Using Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, field of view and accuracy of the laser range finder. First of all, the output values from the range finder can be saved to a file and compared with manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the field of view, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77017Embedded Motion Control 2019 Group 32019-06-12T13:17:18Z<p>S136625: /* Monitor block */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown and proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital map. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether the path runs through a doorway or hallway, or through a room. This helps determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software creates a route to that point. This route is a list of points the robot needs to visit to get to the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== Wall finding algorithm ==<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, we perform the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine the end points of the segment<br />
## Determine the straight line through these end points (a*x + b*y + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to the line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the point with the largest distance to the distance limit value<br />
##* If this value falls below the limit value, there are no further segments (parts) in the global segment.<br />
##* If this value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
<br />
Below is a visual representation of the split principle. The original image is taken from the wiki of EMC 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of the RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only yields start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines through the points and then equating the lines to each other: each corner is the intersection of two fitted lines. The final end points are found by projecting the found vertices perpendicularly onto the fitted line.<br />
<br />
== Path planning ==<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area specified in front of that cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, and the orientation is afterwards corrected if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is ''0.1 m'' to ''10.0 m'', with a field of view of 229.2°. This field of view is divided into 1000 measurement angles. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions; if so, the state machine transitions to the next state.<br />
<br />
The figure below shows the state machine for this challenge. The state chart is part of the "World model block" from the system architecture. This diagram is used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
=== World model block ===<br />
'''To be extended with a description of the spatial recognition in the world model.'''<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This angular range is divided into 1000 measurements, taken at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that the modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. Such a situation can occur, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. This field of view is divided into 1000 measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions.<br />
<br />
=== World model block ===<br />
'''To be extended with the description of the spatial feature recognition in the world model.'''<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is smooth. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
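A minimal sketch of such an S-curve velocity setpoint is given below. The actual implementation follows the S-curve equations linked under Useful Information; the cubic smoothstep used here is only an illustrative approximation, chosen because it has zero slope (and thus no velocity jump) at both ends of the ramp.<br />

```python
def s_curve_velocity(v_start, v_end, t, t_total):
    """Velocity setpoint at time t for a smooth change from v_start to
    v_end over t_total seconds, using a cubic smoothstep ramp."""
    if t <= 0.0:
        return v_start
    if t >= t_total:
        return v_end
    s = t / t_total
    # S-shaped blend: zero slope at s = 0 and s = 1, so no abrupt jerk
    blend = 3.0 * s**2 - 2.0 * s**3
    return v_start + (v_end - v_start) * blend
```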
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a smooth manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Every wall or object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
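The repulsive part of the potential field can be sketched as follows. The avoidance radius, the linear magnitude profile and the gain are illustrative choices, not the tuned values used on PICO; each LRF point inside the avoidance region contributes a vector that pushes the robot directly away from it.<br />

```python
import math

def potential_field_vector(scan, angle_min, angle_inc, r_avoid=0.5, gain=1.0):
    """Sum the repulsive contributions of all LRF points inside the
    avoidance radius. Closer points push harder; the result is the
    potential field sum vector in the robot frame."""
    fx, fy = 0.0, 0.0
    for i, r in enumerate(scan):
        if 0.1 < r < r_avoid:            # skip invalid and far-away points
            angle = angle_min + i * angle_inc
            magnitude = gain * (r_avoid - r) / r_avoid
            fx -= magnitude * math.cos(angle)   # push away from the obstacle
            fy -= magnitude * math.sin(angle)
    return fx, fy
```

In a symmetric corridor the left and right contributions cancel, reproducing the behaviour in the second image above.<br />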
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m and the angle runs from +114.6 to -114.6 degrees, measured from the front of the robot. This field of view is divided into 1000 measurement points, which are sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved to a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77014Embedded Motion Control 2019 Group 32019-06-12T13:13:38Z<p>S136625: /* Minutes */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
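A much-simplified sketch of these transitions is given below. The state names and input flags are illustrative, and detection of the finish line is omitted: the real program also tracks whether the robot is already inside the exit corridor, as described above.<br />

```python
from enum import Enum, auto

class State(Enum):
    DRIVE_FORWARD = auto()
    ALIGN_WITH_WALL = auto()
    FOLLOW_WALL = auto()
    TURN_LEFT = auto()
    TURN_INTO_GAP = auto()

def next_state(state, wall_ahead, gap_right, wall_right):
    """One tick of the wall-follower transitions in the state chart."""
    if state is State.DRIVE_FORWARD and wall_ahead:
        return State.ALIGN_WITH_WALL
    if state is State.ALIGN_WITH_WALL and wall_right:
        return State.FOLLOW_WALL
    if state is State.FOLLOW_WALL and wall_ahead:
        return State.TURN_LEFT          # next wall found: rotate 90 deg CCW
    if state is State.TURN_LEFT and not wall_ahead:
        return State.FOLLOW_WALL
    if state is State.FOLLOW_WALL and gap_right:
        return State.TURN_INTO_GAP      # gap to the right: turn into it
    if state is State.TURN_INTO_GAP and wall_right:
        return State.FOLLOW_WALL
    return state                        # no exit condition met: stay put
```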
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of the aforementioned modifications to the configuration. We felt, however, that these modifications were worth the slowdown and that they proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map on top of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be reached from a neighboring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For each path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help in determining how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points the robot needs to drive through to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
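A sketch of Dijkstra's algorithm on the path-point graph is given below, using a few of the path segments and lengths from the Path planning tables; the function name and the edge list format are illustrative, not taken from our actual code.<br />

```python
import heapq

# A few of the path segments from the Path planning tables: (point, point, length)
EDGES = [(4, 5, 0.86), (5, 3, 0.8), (4, 6, 1.49), (5, 6, 0.7),
         (3, 6, 1.06), (6, 7, 1.7), (7, 8, 2.0)]

def shortest_route(edges, start, goal):
    """Dijkstra's algorithm over the undirected path-point graph.
    Returns (total length, list of points to drive through)."""
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    queue = [(0.0, start, [start])]     # priority queue ordered by distance
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route          # first pop of the goal is the shortest
        if node in visited:
            continue
        visited.add(node)
        for neighbour, w in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (dist + w, neighbour, route + [neighbour]))
    return float('inf'), []             # goal unreachable
```

For example, driving from the start point (4) to cabinet 3 goes via point 5 (length 1.66) rather than via point 6 (length 2.55).<br />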
<br />
== State Machine ==<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
== Wall finding algorithm ==<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we do the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points, written as ax + by + c = 0<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If our value falls below the limit value then there are no more segments (parts) in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
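The recursive split step (3.1&ndash;3.4) can be sketched as follows; the distance threshold of 5&nbsp;cm is an illustrative value, not our tuned limit. The perpendicular distance uses the same formula d = abs(a*x+b*y+c)/sqrt(a^2+b^2) as in step 3.3.<br />

```python
import math

def split(points, threshold=0.05):
    """Recursive split step: fit the line a*x + b*y + c = 0 through the
    end points of a segment, find the intermediate point farthest from
    that line, and split there if the distance exceeds the threshold."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    # Line through the two end points in the form a*x + b*y + c = 0
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = math.sqrt(a * a + b * b)
    # Perpendicular distance of every intermediate point to the line
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        d = abs(a * x + b * y + c) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= threshold:
        return [points[0], points[-1]]   # one straight wall segment
    # Split at the farthest point and recurse on both halves
    left = split(points[:best_i + 1], threshold)
    right = split(points[best_i:], threshold)
    return left[:-1] + right             # do not duplicate the corner point
```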
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating these lines to each other: intersecting two adjacent lines yields a corrected corner point. The final end points are determined by projecting the found vertices perpendicularly onto these lines.<br />
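The corner part of this correction can be sketched as the intersection of two wall lines given in the form ax + by + c = 0; the function name and the parallel-line tolerance are illustrative.<br />

```python
def line_intersection(l1, l2):
    """Intersection of two wall lines given as (a, b, c) coefficients of
    a*x + b*y + c = 0; the result is the corrected corner point."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None                     # (nearly) parallel walls: no corner
    # Cramer's rule for the 2x2 system a*x + b*y = -c
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return x, y
```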
<br />
== Path planning ==<br />
The path points are determined partly automatically and partly by hand. The program loads the Json map file at start-up, detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of that cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet, and specifies the orientation that PICO needs to have to face the cabinet. This direction is subtracted from the actual orientation of PICO, after which the resulting error is corrected if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
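Given these numbers, the bearing of each beam follows directly from its index. A minimal sketch in Python (assuming beam 0 is the rightmost beam at -114.6 degrees and that the 1000 beams are spaced evenly over the field of view; the actual convention needs to be verified on the robot):<br />

```python
import math

FOV_DEG = 229.2    # field of view reported by the PICO simulator
NUM_BEAMS = 1000   # number of range measurements per scan

ANGLE_MIN = math.radians(-FOV_DEG / 2.0)             # assumed bearing of beam 0
ANGLE_INC = math.radians(FOV_DEG) / (NUM_BEAMS - 1)  # spacing between beams

def beam_angle(index):
    """Bearing of a beam in radians, relative to the robot's forward axis."""
    return ANGLE_MIN + index * ANGLE_INC

print(round(math.degrees(beam_angle(0)), 1))    # -114.6
print(round(math.degrees(beam_angle(999)), 1))  # 114.6
```

Whether the first beam is on the right or on the left side of the robot is one of the things to check during the initial tests.<br />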
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
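A minimal sketch of this conditioning step, written in Python for brevity (the range limits come from the Specifications section; the exact scan representation in the framework may differ):<br />

```python
VALID_MIN = 0.1   # m, minimum range of the sensor (see Specifications)
VALID_MAX = 10.0  # m, maximum range of the sensor

def condition_scan(ranges):
    """Replace invalid range readings, such as spurious points measured at the
    origin of the sensor, by None so that feature detection can skip them."""
    return [r if VALID_MIN <= r <= VALID_MAX else None for r in ranges]

print(condition_scan([0.0, 2.5, 11.2, 0.35]))  # [None, 2.5, None, 0.35]
```
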
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions; if so, the state machine transitions to the corresponding next state.<br />
<br />
=== World model block ===<br />
A description by Kevin of the spatial feature recognition in the world model is to be added here.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
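The shape of such a velocity change can be illustrated with a smoothstep-style S-curve; the Python sketch below is an illustrative profile, not the exact implementation of 'Drive':<br />

```python
def s_curve_velocity(v_start, v_end, t, T):
    """Velocity at time t while ramping from v_start to v_end over T seconds.
    The smoothstep polynomial 3s^2 - 2s^3 has zero slope at both ends, so the
    acceleration builds up and dies out gradually (fluent movement)."""
    if t <= 0.0:
        return v_start
    if t >= T:
        return v_end
    s = t / T
    blend = 3.0 * s * s - 2.0 * s * s * s
    return v_start + (v_end - v_start) * blend

# Ramping from standstill to the 0.5 m/s translation limit over one second:
print(s_curve_velocity(0.0, 0.5, 0.5, 1.0))  # 0.25
```
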
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
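The sum vector described above can be computed directly from the conditioned rangefinder points. A minimal sketch in Python (the repulsion gain, the 0.6 m avoidance radius and the inverse-distance repulsion law are illustrative assumptions, not the tuned values):<br />

```python
import math

AVOID_RADIUS = 0.6    # m, obstacles beyond this distance are ignored (assumed)
REPULSION_GAIN = 0.2  # illustrative gain

def potential_field_vector(goal_xy, obstacle_points):
    """Attraction towards the goal plus repulsion away from every obstacle
    point inside the avoidance region, all in the robot frame."""
    gx, gy = goal_xy
    d_goal = math.hypot(gx, gy)
    # Unit attraction vector towards the goal.
    vx, vy = (gx / d_goal, gy / d_goal) if d_goal > 0.0 else (0.0, 0.0)
    for ox, oy in obstacle_points:
        d = math.hypot(ox, oy)
        if 0.0 < d < AVOID_RADIUS:
            # Repulsion grows as the obstacle gets closer, zero at the edge.
            w = REPULSION_GAIN * (1.0 / d - 1.0 / AVOID_RADIUS)
            vx -= w * ox / d
            vy -= w * oy / d
    return vx, vy

# Symmetric corridor: the two wall points cancel and the heading is kept.
vx, vy = potential_field_vector((1.0, 0.0), [(0.0, 0.5), (0.0, -0.5)])
print(round(vx, 3), round(vy, 3))  # 1.0 0.0
```
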
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m and the angle runs from +114.6 to -114.6 degrees, measured from the front of the robot. This field of view is divided into 1000 measurement points, sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actually measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:<br />
[[:Media:Minutes_Group_3.pdf|Minutes]]</div>
<hr />
<div>
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
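The transitions in the chart can be captured compactly in a transition table. A sketch in Python (the state and event names paraphrase the chart and are not the identifiers used in the actual code):<br />

```python
# (state, event) -> next state, paraphrasing the state chart above.
TRANSITIONS = {
    ("drive_forward",      "wall_ahead"): "align_with_wall",
    ("align_with_wall",    "aligned"):    "follow_wall",
    ("follow_wall",        "wall_ahead"): "rotate_ccw_90",
    ("rotate_ccw_90",      "rotated"):    "follow_wall",
    ("follow_wall",        "gap_right"):  "turn_into_corridor",
    ("turn_into_corridor", "turned"):     "follow_corridor",
    ("follow_corridor",    "gap_right"):  "finished",
}

def step(state, event):
    """Advance the state machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

print(step("follow_wall", "gap_right"))  # turn_into_corridor
```
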
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt however that these modifications were worth the slowdown, and that they proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital map. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed on different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can more accurately determine its location relative to B and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
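Dijkstra's algorithm on such a point map can be sketched as follows, here in Python with a priority queue; the example graph uses a few of the path lengths listed in the tables further down:<br />

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest route through the point map as (list of points, total length).
    graph maps each point to a list of (neighbour, distance) pairs."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, point, route = heapq.heappop(queue)
        if point == goal:
            return route, cost
        if point in visited:
            continue
        visited.add(point)
        for neighbour, dist in graph.get(point, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + dist, neighbour, route + [neighbour]))
    return None, float("inf")

# Fragment of the point map: from start point 4 to cabinet 3.
graph = {
    4: [(5, 0.86), (6, 1.49)],
    5: [(3, 0.8), (6, 0.7)],
    6: [(3, 1.06)],
}
route, length = dijkstra(graph, 4, 3)
print(route, round(length, 2))  # [4, 5, 3] 1.66
```
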
<br />
== State Machine ==<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
== Wall finding algorithm ==<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with the help of laser beams. This data is then processed to determine where all walls and objects are. There are many ways in which the data can be processed into useful information. A commonly used algorithm is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we perform the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points, written as a*x + b*y + c = 0<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If the value falls below the limit value, there are no more segments (parts) in the global segment.<br />
##* If the value exceeds the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
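Steps 3.1 to 3.4 can be sketched as a short recursive function, here in Python (the 5 cm distance limit is an illustrative value):<br />

```python
import math

def split(points, limit=0.05):
    """Recursively split a segment of (x, y) points at the point that lies
    furthest from the line through the segment's end points (steps 3.1-3.4).
    Returns the indices of the break points, end points included."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    # Line through the end points in the form a*x + b*y + c = 0.
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = math.hypot(a, b)
    # Find the intermediate point with the largest perpendicular distance.
    best_i, best_d = None, 0.0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        d = abs(a * x + b * y + c) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= limit:  # no further segments inside this segment
        return [0, len(points) - 1]
    left = split(points[:best_i + 1], limit)
    right = split(points[best_i:], limit)
    return left[:-1] + [i + best_i for i in right]

# An L-shaped scan of two walls: the corner is found at index 2.
print(split([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (2.0, 2.0)]))  # [0, 2, 4]
```
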
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is simply done by determining the lines between the points and then equating the lines to each other. The final end points are determined by projecting the found vertices perpendicularly onto the fitted line.<br />
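Equating two fitted lines to obtain the corrected corner point amounts to solving a 2x2 linear system. A Python sketch, using the same a*x + b*y + c = 0 line form as the split algorithm:<br />

```python
def intersect(line1, line2):
    """Corner point where two fitted wall lines a*x + b*y + c = 0 meet,
    or None when the lines are (nearly) parallel."""
    a1, b1, c1 = line1
    a2, b2, c2 = line2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    # Cramer's rule on a1*x + b1*y = -c1 and a2*x + b2*y = -c2.
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return x, y

# A horizontal wall y = 2 and a vertical wall x = 1 meet in the corner (1, 2).
print(intersect((0.0, 1.0, -2.0), (1.0, 0.0, -1.0)))  # (1.0, 2.0)
```
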
<br />
== Path planning ==<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified in the direction variable. The direction is subtracted from the real orientation of PICO, and the orientation is afterwards corrected if PICO is not aligned correctly.<br />
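The orientation correction at a cabinet can be computed by subtracting the direction variable from PICO's orientation and wrapping the result. A Python sketch (wrapping into (-pi, pi] so that PICO always turns the short way is an assumed convention, not necessarily the one in the actual code):<br />

```python
import math

def heading_error(target_direction, robot_angle):
    """Difference between a cabinet point's direction variable and PICO's
    current orientation, wrapped into (-pi, pi]."""
    error = target_direction - robot_angle
    while error <= -math.pi:
        error += 2.0 * math.pi
    while error > math.pi:
        error -= 2.0 * math.pi
    return error

# Facing 170 deg while the cabinet requires -170 deg: turn +20 deg, not -340.
print(round(math.degrees(heading_error(math.radians(-170), math.radians(170))), 1))  # 20.0
```
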
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful information.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition show a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that components will be given, with a source of where this specification comes from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
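The mapping from a rangefinder sample index to a beam angle follows directly from these numbers. Below is a minimal sketch of that conversion, assuming a linear mapping with index 0 at the rightmost beam; the orientation convention is an assumption that should be verified on the real robot.<br />

```python
import math

# Parameters taken from the specification above:
# 229.2 degree field of view, divided into 1000 beams.
FOV_RAD = math.radians(229.2)
NUM_BEAMS = 1000

def beam_angle(index):
    """Angle of beam `index` relative to the front of the robot, in radians.

    Index 0 is assumed to be the rightmost beam (-114.6 deg) and the last
    index the leftmost (+114.6 deg), with a linear mapping in between.
    """
    if not 0 <= index < NUM_BEAMS:
        raise IndexError("beam index out of range")
    return -FOV_RAD / 2 + index * FOV_RAD / (NUM_BEAMS - 1)
```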
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
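The filtering step described above can be sketched as follows. The range thresholds come from the sensor specification; the policy of replacing invalid readings with None (rather than deleting them, which would shift the beam indices) is an assumption of this sketch.<br />

```python
def condition_scan(ranges, range_min=0.1, range_max=10.0):
    """Replace invalid LRF readings with None so that downstream feature
    detection can skip them.

    Readings at (or very near) the sensor origin -- e.g. caused by dust on
    the sensor -- as well as readings outside the specified measurement
    range are treated as invalid.
    """
    return [r if range_min <= r <= range_max else None for r in ranges]
```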
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the current state has fulfilled its exit conditions; if so, the state machine transitions to the next state.<br />
<br />
=== World model block ===<br />
Kevin's description of the spatial recognition functionality will be added here.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
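One simple S-curve variant is a cubic blend between the start and end velocity, so that the acceleration starts and ends at zero. The sketch below uses this cubic profile as an illustration; the full jerk-limited equations are given in the reference linked under Useful Information, and the profile actually used on the robot may differ.<br />

```python
def s_curve_velocity(t, t_total, v_start, v_end):
    """Velocity at time t during an S-shaped transition from v_start to
    v_end over t_total seconds, using a cubic 'smoothstep' blend so that
    the acceleration is zero at both ends of the transition."""
    if t <= 0.0:
        return v_start
    if t >= t_total:
        return v_end
    s = t / t_total
    blend = 3 * s**2 - 2 * s**3   # smooth ramp from 0 to 1
    return v_start + (v_end - v_start) * blend
```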
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
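The repulsive part of the potential field can be sketched directly from the LRF data: every beam that sees an obstacle inside the avoidance region contributes a push away from that obstacle, and contributions from a symmetric environment cancel, exactly as in the corridor situations above. The avoidance radius and weighting function below are hypothetical values for illustration, not the tuned parameters of the actual software.<br />

```python
import math

def repulsion_vector(scan, fov=math.radians(229.2), d_avoid=0.6):
    """Sum the repulsive contributions of all LRF beams that see an
    obstacle within the (hypothetical) avoidance radius d_avoid.

    Each nearby reading pushes the robot away from the obstacle, with a
    magnitude that grows as the obstacle gets closer; readings outside
    d_avoid contribute nothing, matching the first situation described
    above where the field vector is zero."""
    n = len(scan)
    fx = fy = 0.0
    for i, r in enumerate(scan):
        if r is None or r <= 0.0 or r >= d_avoid:
            continue
        angle = -fov / 2 + i * fov / (n - 1)   # beam angle w.r.t. robot front
        weight = (d_avoid - r) / d_avoid       # closer obstacle -> stronger push
        # push away from the obstacle: opposite to the beam direction
        fx -= weight * math.cos(angle)
        fy -= weight * math.sin(angle)
    return fx, fy
```

In a symmetric corridor the left and right contributions cancel (zero sideways component), while an obstacle on one side produces a sum vector pointing to the other side.<br />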
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 0.1 m to 10 m, and the angle spans +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, which are sampled at a rate that can be configured by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved to a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Minutes ==<br />
<br />
This document contains the minutes of all meetings:</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=File:Minutes_Group_3.pdf&diff=77012File:Minutes Group 3.pdf2019-06-12T13:12:16Z<p>S136625: Minutes group 3</p>
<hr />
<div>Minutes group 3</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77008Embedded Motion Control 2019 Group 32019-06-12T13:06:41Z<p>S136625: /* Appendices */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
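The transitions of the state chart above can be summarised as a small transition table. The state and event names below are illustrative labels chosen for this sketch, not the identifiers used in the actual code.<br />

```python
# Minimal skeleton of the wall-following state machine described above.
# Unknown (state, event) pairs leave the state unchanged.
TRANSITIONS = {
    ("drive_forward",   "wall_ahead"): "align_with_wall",
    ("align_with_wall", "aligned"):    "follow_wall",
    ("follow_wall",     "wall_ahead"): "rotate_ccw_90",
    ("rotate_ccw_90",   "rotated"):    "follow_wall",
    ("follow_wall",     "gap_right"):  "turn_into_gap",
    ("turn_into_gap",   "turned"):     "follow_corridor",
    ("follow_corridor", "gap_right"):  "finished",
}

def next_state(state, event):
    """Return the successor state for an event, or the same state if the
    event is not relevant in the current state."""
    return TRANSITIONS.get((state, event), state)
```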
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital map. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at several types of locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be approached from its neighbouring points in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For each path between points, it can be defined whether the path goes through a doorway or hallway, or through a room. This can help determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
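The route search over the point map can be sketched as a standard Dijkstra search, where the graph maps each point id to its neighbouring points and the path lengths from the tables below. This is a generic textbook sketch, not the project's actual implementation.<br />

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over the point map.

    `graph` maps a point id to a list of (neighbour, distance) pairs.
    Returns the list of points to drive through, or None when the goal is
    unreachable (e.g. when all routes to it are blocked)."""
    queue = [(0.0, start, [start])]   # (cost so far, current point, route)
    visited = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return route
        if node in visited:
            continue
        visited.add(node)
        for neighbour, dist in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + dist, neighbour, route + [neighbour]))
    return None
```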
<br />
== State Machine ==<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
== Wall finding algorithm ==<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used algorithm is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we perform the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points, written as a*x + b*y + c = 0<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If this value falls below the limit value, the global segment contains no further sub-segments.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
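Steps 3.1 to 3.4 can be sketched as a short recursive function: fit a line through the end points, find the intermediate point with the largest perpendicular distance, and split there when that distance exceeds the threshold. This is a generic illustration of the split step under the line form a*x + b*y + c = 0 used above, not the project's actual code.<br />

```python
import math

def split(points, threshold):
    """Recursive split step: `points` is a list of (x, y) tuples belonging
    to one global segment; returns the indices of the break points,
    including both end points of the segment."""
    if len(points) < 3:
        return [0, len(points) - 1]
    (x1, y1), (x2, y2) = points[0], points[-1]
    # line through the end points in the form a*x + b*y + c = 0
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = math.hypot(a, b)
    # perpendicular distance of every point to that line (step 3.3)
    dists = [abs(a * x + b * y + c) / norm for x, y in points]
    k = max(range(1, len(points) - 1), key=lambda i: dists[i])
    if dists[k] < threshold:
        return [0, len(points) - 1]             # no further sub-segments
    left = split(points[:k + 1], threshold)      # recurse on both halves
    right = [k + i for i in split(points[k:], threshold)]
    return sorted(set(left + right))
```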
<br />
Below is a visual representation of the split principle. The original image is taken from the page of EMC 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with Mike's explanation of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then intersecting these lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto these lines.<br />
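Snapping two fitted wall segments to a shared corner amounts to intersecting their lines. A minimal sketch of that intersection, using the same a*x + b*y + c = 0 line form as in the split algorithm above (this helper is illustrative, not the project's actual function):<br />

```python
def line_intersection(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y + c = 0,
    used here to snap two fitted wall lines to a shared corner point.
    Returns None for (near-)parallel lines."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None                      # parallel: no unique corner
    x = (b1 * c2 - b2 * c1) / det        # Cramer's rule on the 2x2 system
    y = (a2 * c1 - a1 * c2) / det
    return x, y
```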
<br />
== Path planning ==<br />
The path points are determined both automatically and by hand. The program loads the JSON map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet: it specifies the orientation that PICO needs to have to stand in front of the cabinet. The direction is subtracted from the actual orientation of PICO, after which the orientation is corrected if PICO is not aligned correctly.<br />
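The orientation correction amounts to computing the difference between the cabinet's direction variable and PICO's current orientation. A small sketch, with the wrap-around handled so the correction always turns the shorter way; the angle convention (radians, counterclockwise positive) is an assumption of this sketch.<br />

```python
import math

def orientation_error(target_direction, current_orientation):
    """Difference between a cabinet's direction variable and PICO's
    current orientation, wrapped into (-pi, pi] so that the resulting
    correction rotates the shorter way."""
    err = target_direction - current_orientation
    return math.atan2(math.sin(err), math.cos(err))
```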
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
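The path lengths in the tables above are simply the straight-line distances between the listed coordinates, and can be reproduced from them. A sketch with a subset of the points (ids and coordinates copied from the tables):<br />

```python
import math

# Coordinates from the path point tables above (point id -> (x, y)).
points = {
    4: (5.0, 2.5),   # start point
    5: (5.5, 3.2),
    3: (6.3, 3.2),   # cabinet 3
    6: (5.5, 3.9),
}

def path_length(a, b):
    """Straight-line length of the path between two neighbouring points."""
    (xa, ya), (xb, yb) = points[a], points[b]
    return math.hypot(xb - xa, yb - ya)
```

For example, the length of path 4->5 evaluates to the table entry 0.86 when rounded to two decimals.<br />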
<br><br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful information.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition show a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that components will be given, with a source of where this specification comes from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the state has fulfilled its exit TODO<br />
<br />
=== World model block ===<br />
Hier komt kevin's shit over spatial recognition enzo... biem<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real time, as the robot is expected to encounter dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around it as depicted by the dotted line.<br />
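The repulsive part of such a potential field can be sketched as a sum of per-obstacle component vectors. The gain and avoidance radius below are illustrative, not the group's tuned values.<br />

```python
import math

def potential_field_vector(obstacles, avoid_radius=0.8):
    """Sum of repulsive component vectors, one per obstacle point.

    Each obstacle (x, y) in robot coordinates inside the avoidance radius
    contributes a vector pointing away from it, growing stronger as the
    obstacle gets closer; points outside the radius contribute nothing.
    """
    fx, fy = 0.0, 0.0
    for (x, y) in obstacles:
        d = math.hypot(x, y)
        if 0.0 < d < avoid_radius:
            weight = (avoid_radius - d) / d  # stronger when closer
            fx -= weight * x / d             # unit vector away from obstacle
            fy -= weight * y / d
    return fx, fy
```

With obstacles placed symmetrically on both sides, the sideways components cancel, matching the corridor situation in the second image; a single obstacle on the left yields a sum vector pointing right.<br />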
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
According to the simulation, the range of the laser range finder is 10 cm to 10 m, and the field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement angles, and the scan interval can be configured by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the PICO robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall, which gives the maximum range. The same can be done while driving forward to determine the minimum range. To determine the field of view, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest amount of time in which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.</div>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
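The transition logic of this chart can be sketched as follows. This is a minimal sketch with hypothetical state names and boolean observations; the align-with-wall step is folded into the wall-following state for brevity, and the actual implementation may differ.<br />

```python
# Minimal sketch of the wall-following state machine described above.
# State names paraphrase the chart; the group's actual code may differ.

DRIVE_FORWARD = "drive_forward"   # until a wall appears ahead
FOLLOW_WALL   = "follow_wall"     # keep the wall on the right, within a distance band
TURN_LEFT     = "turn_left"       # 90 deg CCW when something is detected in front
TURN_RIGHT    = "turn_into_gap"   # turn into a gap detected on the right
FINISHED      = "finished"

def next_state(state, wall_ahead, gap_right, in_corridor):
    """One tick of the transition logic, driven by boolean observations."""
    if state == DRIVE_FORWARD and wall_ahead:
        return FOLLOW_WALL
    if state == FOLLOW_WALL:
        if gap_right:
            # a right-hand gap inside the corridor means the finish line
            return FINISHED if in_corridor else TURN_RIGHT
        if wall_ahead:
            return TURN_LEFT
    if state in (TURN_LEFT, TURN_RIGHT):
        return FOLLOW_WALL  # resume following after the turn completes
    return state
```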
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program completed the challenge, we were the slowest performing group as a result of the aforementioned modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. When placing these points, it is important that each point can be reached from at least one other point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This helps determine how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point that is not neighboring, the software creates a route to that point: a list of points the robot needs to visit in order to reach the end point. To make the route as efficient as possible, an algorithm is used that calculates the shortest route, namely Dijkstra's algorithm. A similar algorithm is used in car navigation systems to obtain the shortest route.<br />
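The route search can be sketched with Dijkstra's algorithm over the point graph. The sketch below is illustrative, not the group's actual code; the example edges in the usage note are a subset of the path lengths listed further down this page.<br />

```python
import heapq

def shortest_route(edges, start, goal):
    """Dijkstra's algorithm over an undirected edge list [(a, b, length), ...].

    Returns (total_length, [start, ..., goal]), or (inf, []) if the goal
    is unreachable, e.g. when a route is blocked and its edges removed.
    """
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))  # paths are two-way
    queue = [(0.0, start, [start])]             # (cost so far, node, path)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, w in graph.get(node, ()):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + w, neighbour, path + [neighbour]))
    return float("inf"), []
```

For example, from start point 4 to cabinet point 3 over the edges 4-5 (0.86), 4-6 (1.49), 5-3 (0.8), 5-6 (0.7) and 3-6 (1.06), the shortest route is 4, 5, 3 with a total length of 1.66.<br />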
<br />
== State Machine ==<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
== Wall finding algorithm ==<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used algorithm is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (''ax + by + c = 0'')<br />
## For each data point between these end points, determine the perpendicular distance to the line (''d = abs(a*x + b*y + c)/sqrt(a^2 + b^2)'')<br />
## Compare the point with the largest distance against the distance limit value<br />
##* If this value falls below the limit value, there are no more sub-segments within the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
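Steps 3.1 to 3.4 can be sketched as a recursive function. The function name and the threshold value are illustrative, not taken from the group's code.<br />

```python
import math

def split(points, threshold=0.05):
    """Recursive split step (3.1-3.4 above): return the segment's corner points.

    Draws the line a*x + b*y + c = 0 through the segment's end points,
    finds the point with the largest perpendicular distance
    d = |a*x + b*y + c| / sqrt(a^2 + b^2), and splits there if d exceeds
    the threshold; otherwise the segment is a single wall.
    """
    (x1, y1), (x2, y2) = points[0], points[-1]
    a, b = y2 - y1, x1 - x2                  # line through the end points
    c = x2 * y1 - x1 * y2
    norm = math.hypot(a, b)
    d_max, i_max = 0.0, 0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        d = abs(a * x + b * y + c) / norm
        if d > d_max:
            d_max, i_max = d, i
    if d_max <= threshold:
        return [points[0], points[-1]]       # no further split: one wall
    left = split(points[:i_max + 1], threshold)
    right = split(points[i_max:], threshold)
    return left[:-1] + right                 # shared corner emitted only once
```

Applied to points along an L-shaped wall, the function returns the two end points plus the corner between them.<br />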
<br />
Below is a visual representation of the split principle. The original image is taken from the EMC course of 2017, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with an explanation by Mike of his RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. Corners are obtained by determining the lines between the points and intersecting these lines with each other. The final end points are determined by projecting the found vertices perpendicularly onto the fitted line.<br />
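The two geometric operations in this correction, intersecting two fitted lines to obtain a corner and projecting a vertex perpendicularly onto a line, can be sketched as follows (assuming lines in the form ''ax + by + c = 0''; function names are illustrative):<br />

```python
def line_intersection(l1, l2):
    """Corner point where two fitted wall lines a*x + b*y + c = 0 meet.

    Solved with Cramer's rule; returns None for (near-)parallel lines.
    """
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return (b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det

def project_onto_line(line, x, y):
    """Perpendicular projection of a vertex (x, y) onto a fitted line."""
    a, b, c = line
    t = (a * x + b * y + c) / (a * a + b * b)
    return x - a * t, y - b * t
```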
<br />
== Path planning ==<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet. The orientation that PICO needs to have to be in front of the cabinet is specified in the direction variable. The direction is subtracted from the real orientation of PICO, and the resulting error is afterwards corrected if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful information.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as it determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This situation can occur, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component will be given, with a source of where each specification comes from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view of the sensor is divided into 1000 measurement angles. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Week 2 - 1 May ==<br />
''Notes taken by Mike.''<br />
<br />
Every following meeting requires concrete goals in order for the process to make sense. An agenda is welcome, though it does not need to be as strict as the ones used in DBL projects. The main goal of this meeting is to get to know the expectations for the design document, which needs to be handed in next Monday and presented next Wednesday. These and other milestones, as well as intermediate goals, are to be described in a week-based planning on this Wiki.<br />
<br />
=== Design document ===<br />
The focus of this document lies on the process of making the robot software succeed in the escape room competition and the final competition. It requires a functional decomposition of both competitions. The design document should be written out both on the Wiki and in a PDF document that is to be handed in on Monday the 6th of May. This document is a mere snapshot of the actual design document, which grows and improves over time; that is what this Wiki is for. The rest of this section contains brainstormed ideas for each section of the design document.<br />
<br />
Requirements:<br />
* The entire software runs on one executable on the robot;<br />
* The robot is to autonomously drive itself out of the escape room;<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition;<br />
* The robot has five minutes to get out of the escape room;<br />
* The robot may not stand still for more than 30 seconds.<br />
<br />
Functions:<br />
* Detecting walls;<br />
* Moving;<br />
* Processing the odometry data;<br />
* Following walls;<br />
* Detecting doorways (holes in the wall).<br />
<br />
Components:<br />
* The drivetrain;<br />
* The laser rangefinder.<br />
<br />
Specifications:<br />
* Dimensions of the footprint of the robot, which is the widest part of the robot;<br />
* Maximum speed: 0.5 m/s translation and 1.2 rad/s rotation.<br />
<br />
Interfaces:<br />
* Gitlab connection for pulling the latest software build;<br />
* Ethernet connection to hook the robot up to a notebook to perform the above.<br />
<br />
=== Measurement plan ===<br />
The first two time slots next Tuesday have been reserved for us in order to get to know the robot. Everyone who is able to attend is expected to do so. In order for the time to be used efficiently, code is to be written to perform the tests that follow from a measurement plan. This plan involves testing the limits of the laser rangefinder, such as the maximum distance that the laser can detect, as well as the field of view and the noise level of the data.<br />
<br />
=== Software design ===<br />
The overall thinking process of the robot software needs to be determined in a software design. This involves a state chart diagram that depicts the global functioning of the robot during the escape room competition. This can be tested with code using the simulator of the robot with a map that resembles the escape room layout.<br />
<br />
=== Tasks ===<br />
Collin and Mike: write the design document and make it available to the group members by Saturday.<br />
<br />
Kevin and Job: write a test plan with test code for the experiment session next Tuesday.<br />
<br />
Yves: draft an initial global software design and make a test map of the escape room for the simulation software.<br />
<br />
== Week 3 - 8 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on the 8th of May.<br />
<br />
=== Strategy ===<br />
A change was made to the strategy for the Escape Room Challenge. The new strategy consists of two parts: a plan A and a plan B. First the robot will perform plan A; if this strategy fails, plan B will be performed. Plan A is to make the robot perform a laser scan, then rotate 180 degrees and perform another scan. This gives the robot information about the entire room, from which the software should be able to locate the doorway and escape the room. This plan may not work, since the doorway may be too far away for the laser to detect it, or the software may fail to recognise the doorway. Therefore, plan B exists. This strategy is to drive the robot to a wall of the room. The wall on the right-hand side is then followed until the robot crosses the finish line.<br />
<br />
=== Presentation ===<br />
A Powerpoint presentation was prepared by Kevin for the lecture that afternoon. A few remarks on the presentation were:<br />
* Add the 'Concept system architecture', modified to have a larger font.<br />
* Add 'Communicating the state of the software' as a function<br />
* Keep the assignment explanation and explanation of the robot hardware short<br />
<br />
=== Concept system architecture ===<br />
The concept system architecture was made by Yves. The diagram should be checked for its English, since some sentences are unclear. A few changes were made to the spelling; the content remained mostly the same.<br />
<br />
=== Measurement results ===<br />
The first test with the robot did not go smoothly. Connecting with the robot proved more difficult than expected. When the test program was run, it was discovered that the laser sensor data contained a lot of noise. A test situation, like the escape room, was constructed, and all the data from the robot was recorded and saved. From this data, an algorithm can be designed to condition the sensor data. The data can also be used for the spatial feature recognition.<br />
<br />
=== Tasks ===<br />
The tasks to be finished before the next meeting:<br />
* Spatial Feature Recognition and Monitoring: Mike, Yves<br />
* Laser Range Finder data conditioning: Collin<br />
* Control: Job<br />
* Detailed software design for Escape Room Challenge: Kevin (Deadline: 9/5/2019)<br />
<br />
<br />
The next robot reservations are:<br />
* Tuesday 14/5/2019, from 10:45<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 15/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 4 - 15 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on the 15th of May.<br />
<br />
=== Escape Room Challenge ===<br />
The test of the software for the Escape Room Challenge was successful. Small changes have been made to the code regarding the current state of the software being shown on the terminal. Also, the distance between the robot and the wall has been increased and the travel velocity of the robot has been decreased.<br />
A state machine has been made and put on the Wiki which describes the software.<br />
<br />
=== Wall detection ===<br />
A split and merge algorithm has been developed in Matlab. It can detect walls and corners. The algorithm needs to be further tested and developed. Furthermore, an algorithm needs to be developed that uses the information from the split and merge to find the position of the robot on the map. The current plan is to use a Kalman filter. This needs to be further developed.<br />
<br />
=== Drive Control ===<br />
The function to smoothly accelerate and decelerate the robot is not yet finished. Once the function has been shown to work in the simulation, it can be tested on the robot. This will be either Thursday the 16th of May or the Tuesday after.<br />
<br />
In order to succeed in the final challenge, better agreements and stricter deadlines need to be made and followed by the group.<br />
<br />
=== Tasks ===<br />
*Yves: Filter double points from the 'split and merge' algorithm.<br />
*Mike: Develop the architecture for the C++ project.<br />
*Job: Code a function for the S-curve acceleration for the x and y directions and the z rotation.<br />
*Kevin: Develop a Kalman filter to compare the data from 'split and merge' with a map.<br />
*Collin: Develop a finite state machine for the final challenge.<br />
<br />
The next robot reservations are:<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 22/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 5 - 22 May ==<br />
''Notes taken by Kevin.''<br />
<br />
These are the notes from the group meeting on the 22nd of May.<br />
<br />
=== Finite State Machine and Path planning ===<br />
Collin created a finite state machine for the hospital challenge. The FSM is a fairly complete picture of the hospital challenge, but a different ‘main’ FSM needs to be made in which the actions of the robot itself are shown in a clear manner. Collin also came up with a path planning method. In this method, important points are selected on the given map and connected with each other where possible. The robot will then be able to drive from point to point in a straight line. If some time is left, we could eventually improve the robot by letting it drive between points in a smoother manner.<br />
<br />
=== Wall detection ===<br />
Yves has continued working on the split and merge algorithm. He has tried to port his Matlab implementation to C++, but this has proved to be more difficult than anticipated. He will continue working on this.<br />
<br />
=== Drive Control ===<br />
Job has continued working on the drive control, which is now almost finished. Some tests need to be done on the real robot to see if it is functioning properly in real life as well. Furthermore, the velocity in the drive control needs to be limited, as it is still unbounded at this time. <br />
<br />
=== Architecture ===<br />
Mike has worked on creating the overall architecture of the robot. All the other contributions can then be placed in the correct position in this architecture. <br />
<br />
=== Spatial Awareness ===<br />
Kevin has worked on the Kalman filter and spatial recognition. His idea is to first combine the state prediction and odometry information within a Kalman filter to give an estimated position and orientation. This estimate can then be combined with laser range data to correct for any remaining errors in the estimation. <br />
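Kevin's actual filter is not reproduced here; as an illustration of the predict/update idea described above, a minimal one-dimensional sketch could look as follows. The noise variances ''q'' and ''r'' are illustrative placeholders, not measured values.<br />

```cpp
#include <cmath>

// Minimal 1D Kalman filter sketch: fuse an odometry-based prediction with a
// position correction derived from laser range data. Variances are placeholders.
struct Kalman1D {
    double x;  // estimated position
    double p;  // estimation variance

    // Predict: move by odometry delta 'u', adding process noise variance 'q'.
    void predict(double u, double q) {
        x += u;
        p += q;
    }

    // Update: correct with a measurement 'z' that has variance 'r'.
    void update(double z, double r) {
        double k = p / (p + r);   // Kalman gain
        x += k * (z - x);
        p *= (1.0 - k);
    }
};
```

In the full problem the state would be the 2D position plus orientation, but the predict/update structure stays the same.<br />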
<br />
=== Last Robot reservation ===<br />
During this reservation we were finally able to quickly set up the laptop and the robot for the first time without any issues. During this test we collected a lot of data by putting the robot in different positions in a constructed room, and saving all the laser range data in a rosbag file. Most of the data is static, with the robot standing still, but we also got some data in which the robot drives forward and backwards a little bit.<br />
<br />
=== Next robot reservations ===<br />
The next reservation is Thursday May 23. During this reservation we will have two hours to test the drive control made by Job. Particular attention will be given to static friction and the maximum possible acceleration of the robot. Furthermore, since we want to implement multiple threads in our program, we would like to know how much load the robot can handle in real life. As such, a stress test will be performed. The reservation for next week has been made for Wednesday May 29, during the 3rd and 4th hours.<br />
<br />
=== Tasks ===<br />
*Job: finish drive control and integrate it in the architecture. Also create a main FSM with Collin.<br />
*Kevin: Work on an implementation of the Kalman filter and the spatial recognition software.<br />
*Collin: Continue working on path planning implementation. Also create a main FSM with Job.<br />
*Yves: Continue working on the C++ implementation of split and merge. Also look into the speech functions of the robot.<br />
*Mike: Work on collision detection and on creating multiple threads.<br />
*Everyone: Read old wikis of other groups to get some inspiration.<br />
<br />
<br />
Next meeting: Wednesday 29/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 6 - 29 May ==<br />
''Notes taken by Job''<br />
<br />
These are the notes from the group meeting of the 29th of May.<br />
<br />
=== Progress ===<br />
There has been little integration of functions and everyone has kept working on their separate tasks. It is vital to write the state machine in code so the different functions can be implemented and tasks that still need to be completed can be found.<br />
<br />
Mike has worked on the potential field implementation and has achieved a working state for this function. This function needs to be expanded with a correction for the orientation of the robot.<br />
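As an illustration of the potential-field idea mentioned above, a repulsive term can be sketched as below: every nearby laser point pushes the robot away, more strongly the closer it is. The influence radius and gain are hypothetical values, not the tuned parameters of Mike's implementation.<br />

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Sum of repulsive contributions from obstacle points given in the robot frame.
// Points outside 'influenceRadius' exert no force; closer points push harder.
Vec2 repulsiveForce(const std::vector<Vec2>& obstacles,
                    double influenceRadius = 0.6, double gain = 0.05) {
    Vec2 f{0.0, 0.0};
    for (const Vec2& o : obstacles) {
        double d = std::hypot(o.x, o.y);
        if (d < 1e-6 || d > influenceRadius) continue;
        double mag = gain * (1.0 / d - 1.0 / influenceRadius) / (d * d);
        f.x -= mag * o.x / d;   // unit vector away from the obstacle
        f.y -= mag * o.y / d;
    }
    return f;
}
```

The resulting vector would be added to the attraction toward the current target point, which is why an orientation correction (as noted above) becomes necessary.<br />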
<br />
Yves has worked on the spatial recognition integration of the RANSAC function. This needs to be finished so it can be used for the Kalman filter Kevin has worked on.<br />
<br />
Kevin needs the work from Yves to finish the Kalman filter and needs to add a rotation correction.<br />
<br />
Collin has worked on the shortest path algorithm which is ready to be used.<br />
<br />
Job has improved the DriveControl functions after last week's test session and discussed the integration with the potential fields with Mike.<br />
<br />
=== Planning ===<br />
Since time is running short, hard deadlines have been set for the different tasks:<br />
<br />
*State machine (+ speech function integration) - 02-06-2019, 22.00 - Collin + Job (+ Mike)<br />
*Kalman filter - 04-06-2019, 22.00 - Kevin + Yves<br />
*Presentation - 04-06-2019, 22.00 - Kevin<br />
*Driving - 05-06-2019, 22.00 - Mike + Job<br />
*Cabinet procedure - 02-06-2019, 22.00 - Collin + Job<br />
*Map + Nav-points - 05-06-2019, 22.00 - Yves<br />
*Visualisation OpenCV - Extra task, TBD<br />
<br />
=== Test on Wednesday 14.30 - 15.25 ===<br />
*Test spatial recognition<br />
<br />
=== Test on Thursday 13.30 - 15.25 ===<br />
*Driving + Map<br />
*Cabinet procedure<br />
*Total sequence<br />
<br />
== Week 7 - 6 June ==<br />
''Notes taken by Mike''<br />
=== Progress ===<br />
Kevin has been working on the presentation and the perception functions that fit the map on the detected features. Simple tests suggest that it works by manually feeding the functions with made up realistic points, as well as random points that need to be ignored by the function. For some reason the code does not execute repeatedly though. Either way, the code requires some fine-tuning. This function takes the estimated robot position (odometry) and the LRF data as inputs and has the corrected position as an output. It needs to be extended to take the previous navpoint as the origin of movement. Kevin expects this to be ready for testing by tomorrow's session.<br />
<br />
The presentation is almost done. The architecture slide needs to be simplified to prevent an overwhelming amount of information being visible on screen. The same goes for the state machine.<br />
<br />
Collin has integrated the state machine as much as possible. More public functions need to be added to the WorldModel object that allow the state machine to check whether the program can progress to a following state or not.<br />
<br />
Job and Mike were working on the DriveControl object. The current challenge is driving from point to point. This involves correcting the angle when it deviates from its straight trajectory as a result of the potential vector pushing it away. This is going to be tested in the testing session after this meeting. It also requires implementing the relative position of the end point of the current line trajectory, from Kevin's position estimation function.<br />
<br />
Yves (absent) has been working on implementing the published world map and supplying it with navpoints.<br />
<br />
=== Planning ===<br />
Thursday 6-6: appointment to work together in Gemini South OGO 0 from 9:00 to 10:45, then in Gemini South 4.23 until 12:30. The testing session is from 13:30 until 15:25. The plan is to attempt to integrate ''everything'' before this session to simply test as much as possible.<br />
<br />
Tuesday 11-6: appointment to work together from 8:45 until the testing session from 9:45 until 10:40. The entire code ''should'' be done by now. After the testing session, everything should be fine-tuned in the simulation environment.</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77006Embedded Motion Control 2019 Group 32019-06-12T13:05:00Z<p>S136625: /* Escape room challenge */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can reach in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and its surrounding spatial features. When the robot is at a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can more accurately determine its location relative to B and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This can help determine how the robot's trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points the robot needs to visit in order to reach the end point. To make sure the route is as efficient as possible, an algorithm is used that calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
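The route search described above can be sketched with a standard Dijkstra implementation over the point graph. The adjacency-list layout below is an assumption for illustration, not the project's actual data structure.<br />

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's algorithm over a navigation-point graph. graph[i] holds
// (neighbour index, edge length) pairs. Returns the shortest distance from
// 'start' to every point; predecessors could be tracked to recover the route.
std::vector<double> dijkstra(
        const std::vector<std::vector<std::pair<int, double>>>& graph,
        int start) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(graph.size(), inf);
    using QItem = std::pair<double, int>;  // (distance so far, point index)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;          // skip stale queue entries
        for (auto [v, w] : graph[u]) {
            if (d + w < dist[v]) {
                dist[v] = d + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}
```

With the path lengths tabulated further below as edge weights, this yields the shortest point-to-point route between any start point and cabinet.<br />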
<br />
== State Machine ==<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model" block from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
== Wall finding algorithm ==<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways in which the data can be processed into useful information. A commonly used method is the split and merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the perpendicular distance to this line (d = abs(a*x + b*y + c)/sqrt(a^2 + b^2))<br />
## Compare the largest distance found with the distance limit value<br />
##* If this value falls below the limit value, there are no further sub-segments within the global segment.<br />
##* If this value falls above the limit value, the segment is split at this point, and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
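The split steps (3.1 to 3.4) above can be sketched as a recursive function; the point type and threshold value here are illustrative, not the project's actual code.<br />

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Recursive split step: find the point farthest from the line through the
// segment's end points (a*x + b*y + c = 0) and split there whenever the
// perpendicular distance exceeds 'threshold'. Break indices land in 'breaks'.
void split(const std::vector<Pt>& pts, int first, int last,
           double threshold, std::vector<int>& breaks) {
    if (last - first < 2) return;
    // Implicit line through pts[first] and pts[last].
    double a = pts[last].y - pts[first].y;
    double b = pts[first].x - pts[last].x;
    double c = -(a * pts[first].x + b * pts[first].y);
    double norm = std::sqrt(a * a + b * b);
    int worst = -1;
    double worstDist = 0.0;
    for (int i = first + 1; i < last; ++i) {
        double d = std::fabs(a * pts[i].x + b * pts[i].y + c) / norm;
        if (d > worstDist) { worstDist = d; worst = i; }
    }
    if (worstDist > threshold) {   // split at the farthest point, recurse
        split(pts, first, worst, threshold, breaks);
        breaks.push_back(worst);
        split(pts, worst, last, threshold, breaks);
    }
}
```

Each break index marks a corner candidate between two wall segments, which the merge/RANSAC stage then refines.<br />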
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only gives start and end points somewhere between the found vertices. The lines need to be fitted such that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating these lines to each other, which yields the corner points. The final end points are determined by projecting the found vertices perpendicularly onto the fitted line.<br />
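The two correction operations described above, intersecting fitted lines to obtain a corner and projecting a point perpendicularly onto a line, can be sketched as follows. Slope-intercept form y = a*x + c is assumed for simplicity; vertical walls would need the implicit form instead.<br />

```cpp
struct P { double x, y; };

// Corner point: intersection of y = a1*x + c1 and y = a2*x + c2
// (assumes the lines are not parallel, i.e. a1 != a2).
P intersect(double a1, double c1, double a2, double c2) {
    double x = (c2 - c1) / (a1 - a2);
    return {x, a1 * x + c1};
}

// Corrected end point: perpendicular projection of p onto y = a*x + c,
// obtained by minimising the squared distance to the line.
P project(P p, double a, double c) {
    double x = (p.x + a * (p.y - c)) / (1.0 + a * a);
    return {x, a * x + c};
}
```

Applying `intersect` at every shared corner and `project` at the loose ends aligns the RANSAC output with the real wall lines.<br />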
<br />
== Path planning ==<br />
The path points are determined both automatically and by hand. The program loads the JSON map file when it starts. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area that is specified in front of the cabinet. The rest of the path points are put in by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from the real orientation of PICO, after which the orientation is corrected if PICO is not aligned properly.<br />
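The orientation correction described above amounts to wrapping the difference between the cabinet's direction and PICO's current orientation into (-pi, pi], so the robot always turns the short way. A sketch (function name is illustrative):<br />

```cpp
#include <cmath>

// Heading error between a cabinet's required direction and PICO's current
// orientation, wrapped into (-pi, pi] so the shorter rotation is chosen.
double headingError(double targetDir, double currentOrientation) {
    const double pi = std::acos(-1.0);
    double e = targetDir - currentOrientation;
    while (e > pi)   e -= 2.0 * pi;
    while (e <= -pi) e += 2.0 * pi;
    return e;
}
```

A near-zero error means PICO is aligned with the cabinet; otherwise the sign gives the rotation direction.<br />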
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications of that component are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement points. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
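As an illustration of the angular resolution that follows from 1000 samples over the 229.2° field of view, the index-to-angle conversion can be sketched as follows. The names are ours, not from the PICO API, and the beam ordering is an assumption:<br />

```cpp
#include <cmath>

// Field of view and beam count as reported by the PICO simulator.
constexpr double kFovDeg = 229.2;  // total field of view in degrees
constexpr int kBeams = 1000;       // number of range samples per scan

// Angle of beam i in degrees, measured from the front of the robot;
// beam 0 is assumed to be the first beam at -114.6 degrees, beam 999 at +114.6.
double beamAngleDeg(int i) {
    return -kFovDeg / 2.0 + i * kFovDeg / (kBeams - 1);
}
```

This gives an angular step of roughly 0.23° between neighbouring beams.<br />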
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing whether the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
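The conditioning step described above can be sketched as follows. This is a minimal illustration with our own names, assuming the valid measurement window of 0.1 m to 10.0 m from the specifications:<br />

```cpp
#include <vector>

// Minimal sketch of the conditioning step: samples outside the valid
// measurement window (assumed here to be 0.1 m to 10.0 m) are dropped,
// which also removes points erroneously measured at the origin of the
// sensor (range ~0). A real implementation would keep the beam index
// alongside each range so the angle information is not lost.
std::vector<float> filterRanges(const std::vector<float>& raw,
                                float minRange = 0.1f,
                                float maxRange = 10.0f) {
    std::vector<float> valid;
    valid.reserve(raw.size());
    for (float r : raw) {
        if (r >= minRange && r <= maxRange) {
            valid.push_back(r);
        }
    }
    return valid;
}
```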
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the state has fulfilled its exit conditions, in which case the state machine transitions to the next state.<br />
<br />
=== World model block ===<br />
''To be extended with Kevin's description of the spatial recognition in the world model.''<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
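One way such a fluent velocity change can be realised is with a sinusoidal blend between the current and target speed. The sketch below is only illustrative (the names and the cosine profile are our assumptions, not the exact S-curve used on the robot):<br />

```cpp
#include <cmath>

// Illustrative S-curve ramp: the commanded velocity when accelerating from
// v0 to v1 over a ramp time T. The cosine blend has zero slope at both
// ends, so the acceleration starts and ends at zero and the motion is
// fluent rather than jerky.
double sCurveVelocity(double v0, double v1, double t, double T) {
    if (t <= 0.0) return v0;
    if (t >= T) return v1;
    const double pi = std::acos(-1.0);
    double s = 0.5 - 0.5 * std::cos(pi * t / T);  // runs from 0 to 1, S-shaped
    return v0 + (v1 - v0) * s;
}
```

'Drive distance' can be built on top of this by integrating the commanded velocity until the requested distance is covered.<br />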
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far away enough from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
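The repulsive part of this real-time calculation can be sketched as below. The avoidance radius and the linear weighting are illustrative assumptions, and the attraction towards the goal is omitted:<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

// Sketch of the repulsive part of the potential field: every beam that
// measures an obstacle inside the avoidance radius contributes a component
// vector pointing away from that obstacle, weighted by how deep the
// obstacle is inside the region. Beam angles are in radians, measured from
// the front of the robot.
Vec2 repulsionVector(const std::vector<double>& ranges,
                     const std::vector<double>& angles,
                     double avoidRadius) {
    Vec2 sum{0.0, 0.0};
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double r = ranges[i];
        if (r > 0.0 && r < avoidRadius) {
            double w = (avoidRadius - r) / avoidRadius;  // 0 at the edge, 1 at contact
            sum.x -= w * std::cos(angles[i]);  // push away from the obstacle
            sum.y -= w * std::sin(angles[i]);
        }
    }
    return sum;
}
```

In a symmetric corridor the left and right contributions cancel, reproducing the behaviour of the second image above.<br />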
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled in 1000 parts, at a time interval that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it does not register the wall anymore. The same can be done while driving forward to determine the minimum range. To determine the angle the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time in which the maximum velocity of the robot can be reached while the movement remains smooth. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Week 2 - 1 May ==<br />
''Notes taken by Mike.''<br />
<br />
Every following meeting requires concrete goals in order for the process to make sense. An agenda is welcome, though it does not need to be as strict as the ones used in DBL projects. The main goal of this meeting is to get to know the expectations of the design document that needs to be handed in next Monday, and which should be presented next Wednesday. These and other milestones, as well as intermediate goals, are to be described in a week-based planning in this Wiki.<br />
<br />
=== Design document ===<br />
The focus of this document lies on the process of making the robot software succeed in the escape room competition and the final competition. It requires a functional decomposition of both competitions. The design document should be written out in both the Wiki and a PDF document that is to be handed in on Monday the 6th of May. This document is a mere snapshot of the actual design document, which grows and improves over time. That's what this Wiki is for. The rest of this section contains brainstormed ideas for each section of the design document.<br />
<br />
Requirements:<br />
* The entire software runs on one executable on the robot;<br />
* The robot is to autonomously drive itself out of the escape room;<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition;<br />
* The robot has five minutes to get out of the escape room;<br />
* The robot may not stand still for more than 30 seconds.<br />
<br />
Functions:<br />
* Detecting walls;<br />
* Moving;<br />
* Processing the odometry data;<br />
* Following walls;<br />
* Detecting doorways (holes in the wall).<br />
<br />
Components:<br />
* The drivetrain;<br />
* The laser rangefinder.<br />
<br />
Specifications:<br />
* Dimensions of the footprint of the robot, which is the widest part of the robot;<br />
* Maximum speed: 0.5 m/s translation and 1.2 rad/s rotation.<br />
<br />
Interfaces:<br />
* Gitlab connection for pulling the latest software build;<br />
* Ethernet connection to hook the robot up to a notebook to perform the above.<br />
<br />
=== Measurement plan ===<br />
The first two time slots next Tuesday have been reserved for us in order to get to know the robot. Everyone who is able to attend is expected to attend. In order for the time to be used efficiently, code is to be written to perform tests that follow from a measurement plan. This plan involves testing the limits of the laser rangefinder, such as the maximum distance that the laser can detect, as well as the field of view and the noise level of the data.<br />
<br />
=== Software design ===<br />
The overall thinking process of the robot software needs to be determined in a software design. This involves a state chart diagram that depicts the global functioning of the robot during the escape room competition. This can be tested with code using the simulator of the robot with a map that resembles the escape room layout.<br />
<br />
=== Tasks ===<br />
Collin and Mike: write the design document and make it available to the group members by Saturday.<br />
<br />
Kevin and Job: write a test plan with test code for the experiment session next tuesday.<br />
<br />
Yves: draft an initial global software design and make a test map of the escape room for the simulation software.<br />
<br />
== Week 3 - 8 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on the 8th of May.<br />
<br />
=== Strategy ===<br />
A change was made to the strategy of the Escape Room Challenge. The new strategy consists of two parts, a plan A and a plan B. First the robot will perform plan A. If this strategy fails, plan B will be performed. Plan A is to make the robot perform a laser scan, then rotate the robot 180 degrees and perform another scan. This gives the robot information about the entire room. From this information the software will be able to locate the doorway and escape the room. This plan may not work, since the doorway may be too far away from the laser to detect it, or the software may not be able to detect the doorway. Therefore, plan B exists. This strategy is to drive the robot to a wall of the room. Then the wall will be followed on the right-hand side, until the robot crosses the finish.<br />
<br />
=== Presentation ===<br />
A Powerpoint presentation was prepared by Kevin for the lecture that afternoon. A few remarks on the presentation were:<br />
* Add the 'Concept system architecture', modified to have a larger font.<br />
* Add 'Communicating the state of the software' as a function<br />
* Keep the assignment explanation and explanation of the robot hardware short<br />
<br />
=== Concept system architecture ===<br />
The concept system architecture was made by Yves. The diagram should be checked for its English, since some sentences are unclear. A few changes were made to the spelling. The content remained mostly the same.<br />
<br />
=== Measurement results ===<br />
The first test with the robot did not go smoothly. Connecting with the robot proved more difficult than expected. When the test program was run, it was discovered that the laser sensor data contained a lot of noise. A test situation, like the escape room, was made and all the data from the robot was recorded and saved. From this data, an algorithm can be designed to condition the sensor data. The data can also be used for the Spatial Feature Recognition.<br />
<br />
=== Tasks ===<br />
The task to be finished for next meeting:<br />
* Spatial Feature Recognition and Monitoring: Mike, Yves<br />
* Laser Range Finder data conditioning: Collin<br />
* Control: Job<br />
* Detailed software design for Escape Room Challenge: Kevin (Deadline: 9/5/2019)<br />
<br />
<br />
The next robot reservations are:<br />
* Tuesday 14/5/2019, from 10:45<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 15/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 4 - 15 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on the 15th of May.<br />
<br />
=== Escape Room Challenge ===<br />
The test of the software for the Escape Room Challenge was successful. Small changes have been made to the code regarding the current state of the software being shown on the terminal. Also, the distance between the robot and the wall has been increased and the travel velocity of the robot has been decreased.<br />
A state machine has been made and put on the Wiki which describes the software.<br />
<br />
=== Wall detection ===<br />
A Split and Merge algorithm has been developed in Matlab. It can detect walls and corners. The algorithm needs to be further tested and developed. Furthermore, an algorithm needs to be developed to use the information from the split and merge to find the position of the robot on the map. The current plan is to use a Kalman filter. This needs to be further developed.<br />
<br />
=== Drive Control ===<br />
The function to smoothly accelerate and decelerate the robot is not yet finished. Once the function has been shown to work in the simulation, it can be tested on the robot. This will be either Thursday the 16th of May or the Tuesday after.<br />
<br />
In order to succeed in the final challenge, better agreements and stricter deadlines need to be made and followed by the group.<br />
<br />
=== Tasks ===<br />
*Yves: Filter double points from the 'Split and merge' algorithm.<br />
*Mike: Develop the architecture for the C++ project.<br />
*Job: Code a function for the S-curve acceleration for x, y direction and z rotation.<br />
*Kevin: Develop a Kalman filter to compare the data from 'Split and merge' with a map.<br />
*Collin: Develop a finite state machine for the final challenge.<br />
<br />
The next robot reservations are:<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 22/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 5 - 22 May ==<br />
''Notes taken by Kevin.''<br />
<br />
These are the notes from the group meeting on the 22nd of May.<br />
<br />
=== Finite State Machine and Path planning ===<br />
Collin created a Finite state machine of the hospital challenge. The FSM is a pretty complete picture of the hospital challenge, but a different ‘main’ FSM needs to be made in which the actions of the robot itself are shown in a clear manner. Collin also came up with a path planning method. In this method important points are selected on the given map, which are connected with each other where possible. The robot will then be able to drive from point to point in a straight line. If some time is left, we could eventually improve the robot by letting it drive between points in a smoother manner.<br />
<br />
=== Wall detection ===<br />
Yves has continued working on the split and merge algorithm. He has tried to port his Matlab implementation to C++, but this has proved to be more difficult than anticipated. He will continue working on this.<br />
<br />
=== Drive Control ===<br />
Job has continued working on the drive control, which is now almost finished. Some tests need to be done on the real robot to see if it is functioning properly in real life as well. Furthermore, the velocity in the drive control needs to be limited, as it is still unbounded at this time.<br />
<br />
=== Architecture ===<br />
Mike has worked on creating the overall architecture of the robot. All the other contributions can then be placed in the correct position in this architecture. <br />
<br />
=== Spatial Awareness ===<br />
Kevin has worked on the Kalman filter and spatial recognition. His idea is to first combine the state prediction and odometry information within a Kalman filter to give an estimated position and orientation. This estimation can then be combined with laser range data to correct for any remaining mistakes in the estimation.<br />
<br />
=== Last Robot reservation ===<br />
During this reservation we were finally able to quickly set up the laptop and the robot for the first time without any issues. During this test we collected a lot of data by putting the robot in different positions in a constructed room, and saving all the laser range data in a rosbag file. Most of the data is static, with the robot standing still, but we also got some data in which the robot drives forward and backwards a little bit.<br />
<br />
=== Next robot reservations ===<br />
The next reservation is Thursday May 23. During this reservation we will have two hours to test the drive control made by Job. Particular attention will be given to static friction and the maximum possible acceleration of the robot. Furthermore, since we want to implement multiple threads in our program, we would like to know how much load the robot can handle in real life. As such, a stress test will be performed. The reservation for next week will be made on Wednesday May 29, on the 3rd and 4th hour.<br />
<br />
=== Tasks ===<br />
*Job: finish drive control and integrate it in the architecture. Also create a main FSM with Collin.<br />
*Kevin: Work on an implementation of the Kalman filter and spatial recognition software.<br />
*Collin: Continue working on the path planning implementation. Also create a main FSM with Job.<br />
*Yves: Continue working on the C++ implementation of split and merge. Also look into the speak functions of the robot.<br />
*Mike: Work on collision detection and on creating multiple threads.<br />
*Everyone: Read old wiki's of other groups to get some inspiration.<br />
<br />
<br />
Next meeting: Wednesday 29/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 6 - 29 May ==<br />
''Notes taken by Job''<br />
<br />
These are the notes from the group meeting of the 29th of May.<br />
<br />
=== Progress ===<br />
There has been little integration of functions and everyone has kept working on their separate tasks. It is vital to write the state machine in code so the different functions can be implemented and tasks that still need to be completed can be found.<br />
<br />
Mike has worked on the potential field implementation and has achieved a working state for this function. This function needs to be expanded with a correction for the orientation of the robot.<br />
<br />
Yves has worked on the spatial recognition integration of the RANSAC function. This needs to be finished so it can be used for the Kalman filter Kevin has worked on.<br />
<br />
Kevin needs the work from Yves to finish the Kalman filter and needs to add a rotation correction.<br />
<br />
Collin has worked on the shortest path algorithm which is ready to be used.<br />
<br />
Job has improved the DriveControl functions after last week's test session and discussed the integration with the potential fields with Mike.<br />
<br />
=== Planning ===<br />
Since time is running short, hard deadlines have been set for the different tasks:<br />
<br />
*State machine (+ speech function integration) - 02-06-2019, 22.00 - Collin + Job (+ Mike)<br />
*Kalman filter - 04-06-2019, 22.00 - Kevin + Yves<br />
*Presentation - 04-06-2019, 22.00 - Kevin<br />
*Driving - 05-06-2019, 22.00 - Mike + Job<br />
*Cabinet procedure - 02-06-2019, 22.00 - Collin + Job<br />
*Map + Nav-points - 05-06-2019, 22.00 - Yves<br />
*Visualisation OpenCV - Extra task, TBD<br />
<br />
=== Test on Wednesday 14.30 - 15.25 ===<br />
*Test spatial recognition<br />
<br />
=== Test on Thursday 13.30 - 15.25 ===<br />
*Driving + Map<br />
*Cabinet procedure<br />
*Total sequence<br />
<br />
== Week 7 - 6 June ==<br />
''Notes taken by Mike''<br />
=== Progress ===<br />
Kevin has been working on the presentation and the perception functions that fit the map on the detected features. Simple tests suggest that it works by manually feeding the functions with made up realistic points, as well as random points that need to be ignored by the function. For some reason the code does not execute repeatedly though. Either way, the code requires some fine-tuning. This function takes the estimated robot position (odometry) and the LRF data as inputs and has the corrected position as an output. It needs to be extended to take the previous navpoint as the origin of movement. Kevin expects this to be ready for testing by tomorrow's session.<br />
<br />
The presentation is almost done. The architecture slide needs to be simplified to prevent an overwhelming amount of information being visible on screen. The same goes for the state machine.<br />
<br />
Collin has integrated the state machine as much as possible. More public functions need to be made in the WorldModel object that allow the state machine to check whether the program can progress to any following state or not.<br />
<br />
Job and Mike were working on the DriveControl object. The current challenge is driving from point to point. This involves correcting the angle when it deviates from its straight trajectory as a result of the potential vector pushing it away. This is going to be tested in the testing session after this meeting. It also requires implementing the relative position of the end point of the current line trajectory, from Kevin's position estimation function.<br />
<br />
Yves (absent) has been working on implementing the published world map and supplying it with navpoints.<br />
<br />
=== Planning ===<br />
Thursday 6-6: appointment to work together in Gemini South OGO 0 from 9:00 to 10:45, then in Gemini South 4.23 until 12:30. The testing session is from 13:30 until 15:25. The plan is to attempt to integrate ''everything'' before this session to simply test as much as possible.<br />
<br />
Tuesday 11-6: appointment to work together from 8:45 until the testing session from 9:45 until 10:40. The entire code ''should'' be done by now. After the testing session, everything should be fine-tuned in the simulation environment.</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77005Embedded Motion Control 2019 Group 32019-06-12T13:04:39Z<p>S136625: /* System Design */</p>
<hr />
<div><br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map based on the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed on different locations on the map. These locations are: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be approached from a different point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can more accurately determine its location relative to B and drive to B. For the path between points, it can be defined whether this path goes through a doorway or hallway, or through a room. This can help in how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route, namely Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
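The shortest-route computation over the point map can be sketched as below. The names are illustrative, and only the distances are returned; keeping a predecessor array (not shown) would yield the actual route as a list of points:<br />

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Sketch of Dijkstra's algorithm on the point map: every path point is a
// node, and an edge connects two neighboring points with the straight-line
// distance between them as its weight.
struct Edge { int to; double dist; };

std::vector<double> dijkstra(const std::vector<std::vector<Edge>>& graph,
                             int start) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(graph.size(), inf);
    using Item = std::pair<double, int>;  // (distance so far, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;  // outdated queue entry, skip it
        for (const Edge& e : graph[u]) {
            if (dist[u] + e.dist < dist[e.to]) {
                dist[e.to] = dist[u] + e.dist;
                pq.push({dist[e.to], e.to});
            }
        }
    }
    return dist;
}
```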
<br />
== State Machine ==<br />
The figure below shows the state machine for this challenge. The state chart will be a part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
== Wall finding algorithm ==<br />
To allow PICO to navigate safely, he must know where he is in the world map and what is around him. PICO is equipped with a LIDAR scanner that scans the environment with the help of laser beams. This data is then processed to be able to determine where all walls and objects are. There are many ways in which you can process the data into useful information. A commonly used algorithm is the split and merge algorithm with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we do the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the straight line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If the value falls below the limit value, there are no more segments (parts) in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
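The split step (3.1 to 3.4) can be sketched as a short recursion. This is our illustrative version with assumed names, not the exact project code, and the RANSAC merge step is not shown:<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b, with the line
// written as ax + by + c = 0 as in step 3.3.
double pointLineDist(const Pt& p, const Pt& a, const Pt& b) {
    double A = b.y - a.y;
    double B = a.x - b.x;
    double C = -(A * a.x + B * a.y);
    return std::fabs(A * p.x + B * p.y + C) / std::sqrt(A * A + B * B);
}

// Recursive split step for one segment: the indices at which the segment
// is split, i.e. the candidate corner points, are collected in `breaks`.
void split(const std::vector<Pt>& pts, std::size_t first, std::size_t last,
           double threshold, std::vector<std::size_t>& breaks) {
    if (last <= first + 1) return;
    std::size_t worst = first;
    double worstDist = 0.0;
    for (std::size_t i = first + 1; i < last; ++i) {
        double d = pointLineDist(pts[i], pts[first], pts[last]);
        if (d > worstDist) { worstDist = d; worst = i; }
    }
    if (worstDist > threshold) {  // step 3.4: compare with the limit value
        split(pts, first, worst, threshold, breaks);
        breaks.push_back(worst);
        split(pts, worst, last, threshold, breaks);
    }
}
```

For an L-shaped set of points the recursion returns exactly the index of the corner point.<br />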
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating these lines to each other; the final end points are found by projecting the found vertices perpendicularly onto these lines.<br />
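The two operations involved, intersecting two fitted lines and projecting a vertex perpendicularly onto a line, can be sketched as follows. This is a minimal sketch assuming the y = a*x + c line representation; the names are illustrative and not taken from the project code.<br />

```cpp
#include <cmath>
#include <stdexcept>

struct Point2 { double x, y; };

// Intersection of y = a1*x + c1 and y = a2*x + c2 (the corner point
// where two fitted wall lines meet).
Point2 lineIntersection(double a1, double c1, double a2, double c2) {
    if (std::fabs(a1 - a2) < 1e-9)
        throw std::runtime_error("lines are (nearly) parallel");
    double x = (c2 - c1) / (a1 - a2);
    return {x, a1 * x + c1};
}

// Perpendicular projection of point p onto the line y = a*x + c
// (snapping an end point onto the fitted wall line).
Point2 projectOntoLine(const Point2& p, double a, double c) {
    double x = (p.x + a * (p.y - c)) / (1.0 + a * a);
    return {x, a * x + c};
}
```

For example, the lines y = x and y = -x + 2 intersect at (1, 1), and the point (0, 2) projects onto y = x at (1, 1) as well.<br />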
<br />
== Path planning ==<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at startup. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area specified in front of that cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, and the resulting error is corrected afterwards if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications will be given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. This field of view is divided into 1000 measurement points. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the state has fulfilled its exit conditions. ''TODO''<br />
<br />
=== World model block ===<br />
''Kevin's description of spatial recognition and related topics will be added here.''<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is smooth. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed: 'Drive', for accelerating or decelerating to a certain speed in any direction, and 'Drive distance', for travelling a certain distance in any direction.<br />
<br />
Drive has been further incorporated into a function that uses a potential field. This function prevents the robot from bumping into objects in a smooth manner. See the figure below for a visual representation of the implementation of a potential field: the leftmost image shows the attraction field towards the goal, the middle image shows the repulsion from obstacles, and the rightmost image shows the combination of the two. Any wall or object is taken into account by this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
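The repulsive part of the behaviour described above can be sketched as a sum over obstacle points within an avoidance radius; opposite walls cancel, while a one-sided wall pushes the robot away. The function and parameter names are illustrative assumptions, not the project's actual code.<br />

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Sum of repulsive contributions from obstacle points (robot frame).
// Points outside `avoidRadius` contribute nothing; closer points push
// harder, scaled by `gain`, in the direction away from the obstacle.
Vec2 repulsiveVector(const std::vector<Vec2>& obstacles,
                     double avoidRadius, double gain) {
    Vec2 sum{0.0, 0.0};
    for (const Vec2& p : obstacles) {
        double d = std::sqrt(p.x * p.x + p.y * p.y);
        if (d < 1e-6 || d > avoidRadius) continue;
        double magnitude = gain * (avoidRadius - d) / avoidRadius;
        sum.x -= magnitude * p.x / d; // push away from the obstacle
        sum.y -= magnitude * p.y / d;
    }
    return sum;
}
```

In a symmetric corridor (walls equally far left and right), the contributions cancel and the sum vector is zero, matching the second image; a single nearby wall yields a non-zero push away from it, matching the third.<br />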
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the field of view runs from +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, which are sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the PICO robot is described on the following wiki page: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]<br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to manually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the field of view, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the shortest time over which the maximum velocity of the robot can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
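One way to encode the state chart above in code is sketched below. The state names and transition conditions are illustrative, not taken from the actual challenge code.<br />

```cpp
// Possible states of the wall-follower program described above.
enum class State {
    DriveForward, AlignWithWall, FollowWall,
    TurnLeft90, TurnIntoExit, FollowCorridor, Finished
};

// Transition function: given the current state and the two sensed
// conditions, return the next state of the wall follower.
State nextState(State s, bool wallAhead, bool gapRight) {
    switch (s) {
        case State::DriveForward:   return wallAhead ? State::AlignWithWall : s;
        case State::AlignWithWall:  return State::FollowWall;
        case State::FollowWall:
            if (gapRight)  return State::TurnIntoExit;  // exit corridor found
            if (wallAhead) return State::TurnLeft90;    // next wall found
            return s;
        case State::TurnLeft90:     return State::FollowWall;
        case State::TurnIntoExit:   return State::FollowCorridor;
        case State::FollowCorridor: return gapRight ? State::Finished : s;
        default:                    return s;
    }
}
```

Structuring the transitions as a pure function of (state, sensed conditions) makes the chart easy to test in the simulator before running it on the robot.<br />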
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program completed the challenge successfully, we were the slowest-performing group as a result of these modifications. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Week 2 - 1 May ==<br />
''Notes taken by Mike.''<br />
<br />
Every following meeting requires concrete goals in order for the process to make sense. An agenda is welcome, though it does not need to be as strict as the ones used in DBL projects. The main goal of this meeting is to get to know the expectations for the design document that needs to be handed in next Monday and presented next Wednesday. These and other milestones, as well as intermediate goals, are to be described in a week-based planning on this Wiki.<br />
<br />
=== Design document ===<br />
The focus of this document lies on the process of making the robot software succeed in the escape room competition and the final competition. It requires a functional decomposition of both competitions. The design document should be written out both in the Wiki and in a PDF document that is to be handed in on Monday the 6th of May. This document is a mere snapshot of the actual design document, which grows and improves over time; that is what this Wiki is for. The rest of this section contains brainstormed ideas for each section of the design document.<br />
<br />
Requirements:<br />
* The entire software runs on one executable on the robot;<br />
* The robot is to autonomously drive itself out of the escape room;<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition;<br />
* The robot has five minutes to get out of the escape room;<br />
* The robot may not stand still for more than 30 seconds.<br />
<br />
Functions:<br />
* Detecting walls;<br />
* Moving;<br />
* Processing the odometry data;<br />
* Following walls;<br />
* Detecting doorways (holes in the wall).<br />
<br />
Components:<br />
* The drivetrain;<br />
* The laser rangefinder.<br />
<br />
Specifications:<br />
* Dimensions of the footprint of the robot, which is the widest part of the robot;<br />
* Maximum speed: 0.5 m/s translation and 1.2 rad/s rotation.<br />
<br />
Interfaces:<br />
* Gitlab connection for pulling the latest software build;<br />
* Ethernet connection to hook the robot up to a notebook to perform the above.<br />
<br />
=== Measurement plan ===<br />
The first two time slots next Tuesday have been reserved for us in order to get to know the robot. Everyone who is able to attend is expected to attend. In order for the time to be used efficiently, code is to be written to perform tests that follow from a measurement plan. This plan involves testing the limits of the laser rangefinder, such as the maximum distance that the laser can detect, as well as the field of view and the noise level of the data.<br />
<br />
=== Software design ===<br />
The overall thinking process of the robot software needs to be determined in a software design. This involves a state chart diagram that depicts the global functioning of the robot during the escape room competition. This can be tested with code using the simulator of the robot with a map that resembles the escape room layout.<br />
<br />
=== Tasks ===<br />
Collin and Mike: write the design document and make it available to the group members by Saturday.<br />
<br />
Kevin and Job: write a test plan with test code for the experiment session next Tuesday.<br />
<br />
Yves: draft an initial global software design and make a test map of the escape room for the simulation software.<br />
<br />
== Week 3 - 8 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on the 8th of May.<br />
<br />
=== Strategy ===<br />
A change was made to the strategy for the Escape Room Challenge. The new strategy has two parts: a plan A and a plan B. First the robot performs plan A; if that strategy fails, plan B is executed. Plan A is to have the robot perform a laser scan, then rotate 180 degrees and perform another scan. This gives the robot information about the entire room, from which the software should be able to locate the doorway and escape the room. This plan may not work, since the doorway may be too far away for the laser to detect it, or the software may fail to recognise the doorway. Therefore plan B exists: drive the robot to a wall of the room and then follow the wall on the right-hand side until the robot crosses the finish line.<br />
<br />
=== Presentation ===<br />
A Powerpoint presentation was prepared by Kevin for the lecture that afternoon. A few remarks on the presentation were:<br />
* Add the 'Concept system architecture', modified to have a larger font.<br />
* Add 'Communicating the state of the software' as a function<br />
* Keep the assignment explanation and explanation of the robot hardware short<br />
<br />
=== Concept system architecture ===<br />
The concept system architecture was made by Yves. The diagram should be checked for its English, since some sentences are unclear. A few changes were made to the spelling. The content remained mostly the same.<br />
<br />
=== Measurement results ===<br />
The first test with the robot did not go smoothly. Connecting with the robot proved more difficult than expected. When the test program was run, it was discovered that the laser sensor output contained a lot of noise. A test situation, like the escape room, was set up and all the data from the robot was recorded and saved. From this data, an algorithm can be designed to condition the sensor data. The data can also be used for the Spatial Feature Recognition.<br />
<br />
=== Tasks ===<br />
The task to be finished for next meeting:<br />
* Spatial Feature Recognition and Monitoring: Mike, Yves<br />
* Laser Range Finder data conditioning: Collin<br />
* Control: Job<br />
* Detailed software design for Escape Room Challenge: Kevin (Deadline: 9/5/2019)<br />
<br />
<br />
The next robot reservations are:<br />
* Tuesday 14/5/2019, from 10:45<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 15/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 4 - 15 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on the 15th of May.<br />
<br />
=== Escape Room Challenge ===<br />
The test of the software for the Escape Room Challenge was successful. Small changes have been made to the code regarding the current state of the software being shown on the terminal. Also, the distance between the robot and the wall has been increased and the travel velocity of the robot has been decreased.<br />
A state machine that describes the software has been made and put on the Wiki.<br />
<br />
=== Wall detection ===<br />
A split and merge algorithm has been developed in Matlab. It can detect walls and corners. The algorithm needs to be further tested and developed. Furthermore, an algorithm needs to be developed that uses the information from the split and merge to find the position of the robot on the map. The current plan is to use a Kalman filter. This needs to be further developed.<br />
<br />
=== Drive Control ===<br />
The function to smoothly accelerate and decelerate the robot is not yet finished. Once the function has been shown to work in the simulation, it can be tested on the robot. This will be either Thursday the 16th of May or the Tuesday after.<br />
<br />
In order to succeed in the final challenge, better agreements and stricter deadlines need to be made and followed by the group.<br />
<br />
=== Tasks ===<br />
*Yves: Filter duplicate points from the 'split and merge' algorithm.<br />
*Mike: Develop the architecture for the C++ project.<br />
*Job: Code a function for the S-curve acceleration for the x and y directions and the z rotation.<br />
*Kevin: Develop a Kalman filter to compare the data from 'split and merge' with a map.<br />
*Collin: Develop a finite state machine for the final challenge.<br />
<br />
The next robot reservations are:<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 22/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 5 - 22 May ==<br />
''Notes taken by Kevin.''<br />
<br />
These are the notes from the group meeting on the 22nd of May.<br />
<br />
=== Finite State Machine and Path planning ===<br />
Collin created a finite state machine for the hospital challenge. The FSM is a fairly complete picture of the hospital challenge, but a separate 'main' FSM needs to be made in which the actions of the robot itself are shown in a clear manner. Collin also came up with a path planning method. In this method, important points are selected on the given map and connected with each other where possible. The robot will then be able to drive from point to point in a straight line. If some time is left, we could eventually improve the robot by letting it drive between points in a smoother manner.<br />
<br />
=== Wall detection ===<br />
Yves has continued working on the split and merge algorithm. He has tried to port his Matlab implementation to C++, but this has proved to be more difficult than anticipated. He will continue working on this.<br />
<br />
=== Drive Control ===<br />
Job has continued working on the drive control, which is now almost finished. Some tests need to be done on the real robot to see if it functions properly in real life as well. Furthermore, the velocity in the drive control needs to be limited, as it is still unbounded at this time.<br />
<br />
=== Architecture ===<br />
Mike has worked on creating the overall architecture of the robot. All the other contributions can then be placed in the correct position in this architecture. <br />
<br />
=== Spatial Awareness ===<br />
Kevin has worked on the Kalman filter and spatial recognition. His idea is to first combine the state prediction and odometry information within a Kalman filter to give an estimated position and orientation. This estimate can then be combined with laser range data to correct for any remaining errors in the estimation.<br />
<br />
=== Last Robot reservation ===<br />
During this reservation we were finally able to quickly set up the laptop and the robot for the first time without any issues. During this test we collected a lot of data by putting the robot in different positions in a constructed room, and saving all the laser range data in a rosbag file. Most of the data is static, with the robot standing still, but we also got some data in which the robot drives forward and backwards a little bit.<br />
<br />
=== Next robot reservations ===<br />
The next reservation is Thursday the 23rd of May. During this reservation we will have two hours to test the drive control made by Job. Particular attention will be given to static friction and the maximum possible acceleration of the robot. Furthermore, since we want to implement multiple threads in our program, we would like to know how much load the robot can handle in real life; a stress test will be performed to find this out. The reservation for next week will be made on Wednesday the 29th of May, during the 3rd and 4th hours.<br />
<br />
=== Tasks ===<br />
*Job: finish drive control and integrate it in the architecture. Also create a main FSM with Collin.<br />
*Kevin: Work on an implementation of the Kalman filter and spatial recognition software.<br />
*Collin: Continue working on path planning implementation. Also create a main FSM with Job.<br />
*Yves: Continue working on the C++ implementation of split and merge. Also look into the speech functions of the robot.<br />
*Mike: Work on collision detection and working on creating multiple threads.<br />
*Everyone: Read the old wikis of other groups to get some inspiration.<br />
<br />
<br />
Next meeting: Wednesday 29/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 6 - 29 May ==<br />
''Notes taken by Job''<br />
<br />
These are the notes from the group meeting of the 29th of May.<br />
<br />
=== Progress ===<br />
There has been little integration of functions and everyone has kept working on their separate tasks. It is vital to write the state machine in code so the different functions can be implemented and tasks that still need to be completed can be found.<br />
<br />
Mike has worked on the potential field implementation and has achieved a working state for this function. This function needs to be expanded with a correction for the orientation of the robot.<br />
<br />
Yves has worked on the spatial recognition integration of the RANSAC function. This needs to be finished so it can be used for the Kalman filter Kevin has worked on.<br />
<br />
Kevin needs the work from Yves to finish the Kalman filter and needs to add a rotation correction.<br />
<br />
Collin has worked on the shortest path algorithm which is ready to be used.<br />
<br />
Job has improved the DriveControl functions after last week's test session and discussed the integration with the potential fields with Mike.<br />
<br />
=== Planning ===<br />
Since time is running short, hard deadlines have been set for the different tasks:<br />
<br />
*State machine (+ speech function integration) - 02-06-2019, 22.00 - Collin + Job (+ Mike)<br />
*Kalman filter - 04-06-2019, 22.00 - Kevin + Yves<br />
*Presentation - 04-06-2019, 22.00 - Kevin<br />
*Driving - 05-06-2019, 22.00 - Mike + Job<br />
*Cabinet procedure - 02-06-2019, 22.00 - Collin + Job<br />
*Map + Nav-points - 05-06-2019, 22.00 - Yves<br />
*Visualisation OpenCV - Extra task, TBD<br />
<br />
=== Test on Wednesday 14.30 - 15.25 ===<br />
*Test spatial recognition<br />
<br />
=== Test on Thursday 13.30 - 15.25 ===<br />
*Driving + Map<br />
*Cabinet procedure<br />
*Total sequence<br />
<br />
== Week 7 - 6 June ==<br />
''Notes taken by Mike''<br />
=== Progress ===<br />
Kevin has been working on the presentation and the perception functions that fit the map onto the detected features. Simple tests suggest that it works by manually feeding the functions with made-up realistic points, as well as random points that need to be ignored by the function. For some reason the code does not execute repeatedly though. Either way, the code requires some fine-tuning. This function takes the estimated robot position (odometry) and the LRF data as inputs and has the corrected position as an output. It needs to be extended to take the previous navpoint as the origin of movement. Kevin expects this to be ready for testing by tomorrow's session.<br />
<br />
The presentation is almost done. The architecture slide needs to be simplified to prevent an overwhelming amount of information being visible on screen. The same goes for the state machine.<br />
<br />
Collin has integrated the state machine as much as possible. More public functions need to be made in the WorldModel object that allow the state machine to check whether the program can progress to any following state or not.<br />
<br />
Job and Mike were working on the DriveControl object. The current challenge is driving from point to point. This involves correcting the angle when it deviates from its straight trajectory as a result of the potential vector pushing it away. This is going to be tested in the testing session after this meeting. It also requires implementing the relative position of the end point of the current line trajectory, from Kevin's position estimation function.<br />
<br />
Yves (absent) has been working on implementing the published world map and supplying it with navpoints.<br />
<br />
=== Planning ===<br />
Thursday 6-6: appointment to work together in Gemini South OGO 0 from 9:00 to 10:45, then in Gemini South 4.23 until 12:30. The testing session is from 13:30 until 15:25. The plan is to attempt to integrate ''everything'' before this session to simply test as much as possible.<br />
<br />
Tuesday 11-6: appointment to work together from 8:45 until the testing session from 9:45 until 10:40. The entire code ''should'' be done by now. After the testing session, everything should be fine-tuned in the simulation environment.</div>S136625https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019_Group_3&diff=77004Embedded Motion Control 2019 Group 32019-06-12T13:04:23Z<p>S136625: /* Hospital Competition */</p>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which was originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
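Route calculation over such a map can be sketched with Dijkstra's algorithm on a graph of navigation points (an illustrative sketch; the adjacency-list layout and names are hypothetical, not taken from the group's code). Blocking a route simply means removing its edge and re-running the search:<br />
<br />
```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra over an adjacency list: adj[u] lists (neighbour, edge cost)
// pairs. Returns the cheapest travel cost from 'start' to every node.
std::vector<double> dijkstra(
        const std::vector<std::vector<std::pair<int, double>>>& adj,
        int start) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), inf);
    using QE = std::pair<double, int>;  // (distance so far, node)
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;  // skip stale queue entries
        for (auto [v, w] : adj[u]) {
            if (d + w < dist[v]) {  // found a cheaper route to v
                dist[v] = d + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}
```
<br />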
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given, along with the source they come from.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate it about the ''z'' axis. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The field of view is divided into 1000 measurement angles. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads all the content from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
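The conditioning step described above can be sketched as a simple range filter (hypothetical helper, not the actual Perception code; the real implementation would also keep track of the beam angles belonging to each reading):<br />
<br />
```cpp
#include <cassert>
#include <vector>

// Drop LRF readings outside the sensor's valid range, such as spurious
// points measured at the sensor origin (e.g. caused by dust).
std::vector<double> filterRanges(const std::vector<double>& ranges,
                                 double rangeMin, double rangeMax) {
    std::vector<double> valid;
    valid.reserve(ranges.size());
    for (double r : ranges) {
        if (r >= rangeMin && r <= rangeMax) {
            valid.push_back(r);  // keep only physically plausible readings
        }
    }
    return valid;
}
```
<br />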
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the current state has fulfilled its exit conditions.<br />
<br />
=== World model block ===<br />
Kevin's description of the spatial recognition and position estimation will be added here.<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
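An S-curve velocity ramp can be sketched with the smoothstep polynomial (an illustrative sketch; the function name and the choice of polynomial are assumptions, not the group's DriveControl code):<br />
<br />
```cpp
#include <cassert>
#include <cmath>

// Smoothly interpolate the commanded velocity from v0 to v1 over a
// duration T using s(u) = 3u^2 - 2u^3, which has zero slope (and thus
// zero commanded acceleration) at both ends of the ramp.
double sCurveVelocity(double v0, double v1, double T, double t) {
    if (t <= 0.0) return v0;  // before the ramp starts
    if (t >= T)   return v1;  // after the ramp ends
    double u = t / T;                   // normalised time in [0, 1]
    double s = u * u * (3.0 - 2.0 * u); // smoothstep
    return v0 + (v1 - v0) * s;
}
```
<br />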
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
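The repulsive part of this behaviour can be sketched as follows (hypothetical names and gains; the real implementation works on the full set of LRF points and adds the attraction term): each obstacle point within the avoidance radius contributes a vector pointing away from it, growing stronger as the point gets closer, so symmetric obstacles cancel out as in the corridor example.<br />
<br />
```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Sum of repulsive contributions from obstacle points, expressed in
// the robot frame (robot at the origin).
Vec2 repulsionVector(const std::vector<Vec2>& obstacles,
                     double avoidRadius, double gain) {
    Vec2 sum{0.0, 0.0};
    for (const Vec2& o : obstacles) {
        double d = std::hypot(o.x, o.y);  // distance to the obstacle
        if (d > 1e-6 && d < avoidRadius) {
            double w = gain * (avoidRadius - d) / d;  // stronger when closer
            sum.x -= w * o.x / d;  // unit vector away from the obstacle
            sum.y -= w * o.y / d;
        }
    }
    return sum;
}
```
<br />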
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is sampled at 1000 angles, at a rate that can be configured by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the field-of-view angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the time over which the robot smoothly reaches its maximum velocity. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
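The core of the chart above can be sketched as a small transition function (hypothetical names and a simplified state set; the real program contains more states and timing checks):<br />
<br />
```cpp
#include <cassert>

enum class State { DriveForward, FollowWall, TurnLeft,
                   FollowCorridor, Finished };

struct Sensing {
    bool wallAhead;  // something detected in front of the robot
    bool gapRight;   // gap detected to the right of the robot
};

// One transition step of the wall follower: turn left at inner
// corners, turn into a gap on the right, finish after the corridor.
State step(State s, const Sensing& in) {
    switch (s) {
        case State::DriveForward:
            // drive straight until a wall is found, then follow it
            return in.wallAhead ? State::FollowWall : State::DriveForward;
        case State::FollowWall:
            if (in.gapRight)  return State::FollowCorridor;  // exit found
            if (in.wallAhead) return State::TurnLeft;        // inner corner
            return State::FollowWall;
        case State::TurnLeft:
            return State::FollowWall;  // resume following after the turn
        case State::FollowCorridor:
            // the next gap on the right means the finish line was crossed
            return in.gapRight ? State::Finished : State::FollowCorridor;
        default:
            return s;  // Finished is terminal
    }
}
```
<br />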
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution to not bump into the walls, we reduced the speed of the robot and increased the distance the robot would keep to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of the named modifications to the configuration. We felt, however, that these modifications were worth the slowdown and that they proved the robustness of the simple approach our software took.<br />
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Week 2 - 1 May ==<br />
''Notes taken by Mike.''<br />
<br />
Every following meeting requires concrete goals in order for the process to make sense. An agenda is welcome, though it does not need to be as strict as the ones used in DBL projects. The main goal of this meeting is to get to know the expectations of the design document that needs to be handed in next Monday, and which should be presented next Wednesday. These and other milestones, as well as intermediate goals, are to be described in a week-based planning in this Wiki.<br />
<br />
=== Design document ===<br />
The focus of this document lies on the process of making the robot software succeed in the escape room competition and the final competition. It requires a functional decomposition of both competitions. The design document should be written out in both the Wiki and a PDF document that is to be handed in on Monday the 6th of May. This document is a mere snapshot of the actual design document, which grows and improves over time. That's what this Wiki is for. The rest of this section contains brainstormed ideas for each section of the design document.<br />
<br />
Requirements:<br />
* The entire software runs on one executable on the robot;<br />
* The robot is to autonomously drive itself out of the escape room;<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition;<br />
* The robot has five minutes to get out of the escape room;<br />
* The robot may not stand still for more than 30 seconds.<br />
<br />
Functions:<br />
* Detecting walls;<br />
* Moving;<br />
* Processing the odometry data;<br />
* Following walls;<br />
* Detecting doorways (holes in the wall).<br />
<br />
Components:<br />
* The drivetrain;<br />
* The laser rangefinder.<br />
<br />
Specifications:<br />
* Dimensions of the footprint of the robot, which is the widest part of the robot;<br />
* Maximum speed: 0.5 m/s translation and 1.2 rad/s rotation.<br />
<br />
Interfaces:<br />
* Gitlab connection for pulling the latest software build;<br />
* Ethernet connection to hook the robot up to a notebook to perform the above.<br />
<br />
=== Measurement plan ===<br />
The first two time slots next Tuesday have been reserved for us in order to get to know the robot. Everyone who is able to attend is expected to attend. In order for the time to be used efficiently, code is to be written to perform tests that follow from a measurement plan. This plan involves testing the limits of the laser rangefinder, such as the maximum distance that the laser can detect, as well as the field of view and the noise level of the data.<br />
<br />
=== Software design ===<br />
The overall thinking process of the robot software needs to be determined in a software design. This involves a state chart diagram that depicts the global functioning of the robot during the escape room competition. This can be tested with code using the simulator of the robot with a map that resembles the escape room layout.<br />
<br />
=== Tasks ===<br />
Collin and Mike: write the design document and make it available to the group members by Saturday.<br />
<br />
Kevin and Job: write a test plan with test code for the experiment session next Tuesday.<br />
<br />
Yves: draft an initial global software design and make a test map of the escape room for the simulation software.<br />
<br />
== Week 3 - 8 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on 8th of May.<br />
<br />
=== Strategy ===<br />
A change was made to the strategy of the Escape Room Challenge. The new strategy is in two parts, a plan A and a plan B. First the robot will perform plan A; if this strategy fails, plan B will be performed. Plan A is to make the robot perform a laser scan, then rotate the robot 180 degrees and perform another scan. This gives the robot information about the entire room. From this information the software will be able to locate the doorway and escape the room. This plan may not work, since the doorway may be too far away from the laser to detect it, or the software may not be able to detect the doorway. Therefore, plan B exists. This strategy is to drive the robot to a wall of the room. Then the wall will be followed on the right-hand side, until the robot crosses the finish line.<br />
<br />
=== Presentation ===<br />
A Powerpoint presentation was prepared by Kevin for the lecture that afternoon. A few remarks on the presentation were:<br />
* Add the 'Concept system architecture', modified to have a larger font.<br />
* Add 'Communicating the state of the software' as a function<br />
* Keep the assignment explanation and explanation of the robot hardware short<br />
<br />
=== Concept system architecture ===<br />
The concept system architecture was made by Yves. The diagram should be checked on its English, since some sentences are unclear. A few changes were made to the spelling. The content remained mostly the same.<br />
<br />
=== Measurement results ===<br />
The first test with the robot did not go smoothly. Connecting with the robot proved more difficult than expected. When the test program was run, it was discovered that the laser sensor data contained a lot of noise. A test situation, like the escape room, was made and all the data from the robot was recorded and saved. From this data, an algorithm can be designed to condition the sensor data. The data can also be used for the Spatial Feature Recognition.<br />
<br />
=== Tasks ===<br />
The task to be finished for next meeting:<br />
* Spatial Feature Recognition and Monitoring: Mike, Yves<br />
* Laser Range Finder data conditioning: Collin<br />
* Control: Job<br />
* Detailed software design for Escape Room Challenge: Kevin (Deadline: 9/5/2019)<br />
<br />
<br />
The next robot reservations are:<br />
* Tuesday 14/5/2019, from 10:45<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 15/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 4 - 15 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on 15th of May.<br />
<br />
=== Escape Room Challenge ===<br />
The test of the software for the Escape Room Challenge was successful. Small changes have been made to the code regarding the current state of the software being shown on the terminal. Also, the distance between the robot and the wall has been increased and the travel velocity of the robot has been decreased.<br />
A state machine has been made and put on the Wiki which describes the software.<br />
<br />
=== Wall detection ===<br />
A Split and Merge algorithm has been developed in Matlab. It can detect walls and corners. The algorithm needs to be further tested and developed. Furthermore, an algorithm needs to be developed to use the information from the split and merge to find the position of the robot on the map. The current plan is to use a Kalman filter. This needs to be further developed.<br />
<br />
=== Drive Control ===<br />
The function to smoothly accelerate and decelerate the robot is not yet finished. Once the function has been shown to work in the simulation, it can be tested on the robot. This will be either Thursday 16th of May or the Tuesday after.<br />
<br />
In order to succeed in the final challenge, better agreements and stricter deadlines need to be made and followed by the group.<br />
<br />
=== Tasks ===<br />
*Yves: Filter double points from the 'Split and Merge' algorithm.<br />
*Mike: Develop the architecture for the C++ project.<br />
*Job: Code a function for the S-curve acceleration for x, y direction and z rotation.<br />
*Kevin: Develop a Kalman filter to compare the data from 'Split and Merge' with a map.<br />
*Collin: Develop a finite state machine for the final challenge.<br />
<br />
The next robot reservations are:<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 22/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 5 - 22 May ==<br />
''Notes taken by Kevin.''<br />
<br />
These are the notes from the group meeting on the 22nd of May.<br />
<br />
=== Finite State Machine and Path planning ===<br />
Collin created a finite state machine for the hospital challenge. The FSM is a fairly complete picture of the hospital challenge, but a separate 'main' FSM needs to be made in which the actions of the robot itself are shown in a clear manner. Collin also came up with a path planning method. In this method, important points are selected on the given map, which are connected with each other where possible. The robot will then be able to drive from point to point in a straight line. If some time is left, the robot could eventually be improved by letting it drive between points in a smoother manner.<br />
<br />
=== Wall detection ===<br />
Yves has continued working on the split-and-merge algorithm. He has tried to port his MATLAB implementation to C++, but this has proved to be more difficult than anticipated. He will continue working on this.<br />
<br />
=== Drive Control ===<br />
Job has continued working on the drive control, which is now almost finished. Some tests need to be done on the real robot to see if it functions properly in real life as well. Furthermore, the velocity in the drive control needs to be limited, as it is still unbounded at this time. <br />
<br />
=== Architecture ===<br />
Mike has worked on creating the overall architecture of the robot. All the other contributions can then be placed in the correct position in this architecture. <br />
<br />
=== Spatial Awareness ===<br />
Kevin has worked on the Kalman filter and spatial recognition. His idea is to first combine the state prediction and odometry information within a Kalman filter to give an estimated position and orientation. This estimate can then be combined with laser range data to correct for any remaining errors in the estimation. <br />
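The predict/correct idea described here can be sketched in one dimension. This is an illustrative sketch, not the group's actual implementation; the noise variances are made-up placeholders.<br />

```cpp
#include <cmath>

// Minimal 1-D Kalman filter illustrating the predict/correct idea:
// predict the position from the odometry displacement, then correct
// it with an absolute position measurement derived from the LRF data.
// (Illustrative sketch; variances are placeholder values.)
struct Kalman1D {
    double x = 0.0;   // estimated position
    double p = 1.0;   // estimate variance
    double q;         // process (odometry) noise variance
    double r;         // measurement (LRF) noise variance

    Kalman1D(double q_, double r_) : q(q_), r(r_) {}

    // Prediction step: integrate the odometry displacement.
    void predict(double odom_delta) {
        x += odom_delta;
        p += q;
    }

    // Correction step: blend in an absolute position measurement z.
    void update(double z) {
        double k = p / (p + r);   // Kalman gain
        x += k * (z - x);
        p *= (1.0 - k);
    }
};
```

The same structure extends to the full (x, y, orientation) state by replacing the scalars with vectors and matrices.<br />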
<br />
=== Last Robot reservation ===<br />
During this reservation we were finally able to quickly set up the laptop and the robot for the first time without any issues. During this test we collected a lot of data by putting the robot in different positions in a constructed room, and saving all the laser range data in a rosbag file. Most of the data is static, with the robot standing still, but we also got some data in which the robot drives forward and backwards a little bit.<br />
<br />
=== Next robot reservations ===<br />
The next reservation is Thursday, May 23. During this reservation we will have two hours to test the drive control made by Job. Particular attention will be given to static friction and the maximum possible acceleration of the robot. Furthermore, since we want to implement multiple threads in our program, we would like to know how much load the robot can handle in real life. As such, a stress test will be made to determine this. The reservation for next week will be made on Wednesday, May 29, during the 3rd and 4th hours.<br />
<br />
=== Tasks ===<br />
*Job: finish drive control and integrate it in the architecture. Also create a main FSM with Collin.<br />
*Kevin: Work on an implementation of the Kalman filter and spatial recognition software.<br />
*Collin: Continue working on path planning implementation. Also create a main FSM with Job.<br />
*Yves: Continue working on the C++ implementation of split and merge. Also look into the speech functions of the robot.<br />
*Mike: Work on collision detection and on creating multiple threads.<br />
*Everyone: Read old wikis of other groups to get some inspiration.<br />
<br />
<br />
Next meeting: Wednesday 29/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 6 - 29 May ==<br />
''Notes taken by Job''<br />
<br />
These are the notes from the group meeting of the 29th of May.<br />
<br />
=== Progress ===<br />
There has been little integration of functions and everyone has kept working on their separate tasks. It is vital to write the state machine in code so the different functions can be implemented and tasks that still need to be completed can be found.<br />
<br />
Mike has worked on the potential field implementation and has achieved a working state for this function. This function needs to be expanded with a correction for the orientation of the robot.<br />
<br />
Yves has worked on integrating the RANSAC function into the spatial recognition. This needs to be finished so it can be used for the Kalman filter Kevin has worked on.<br />
<br />
Kevin needs the work from Yves to finish the Kalman filter and needs to add a rotation correction.<br />
<br />
Collin has worked on the shortest path algorithm which is ready to be used.<br />
<br />
Job has improved the drive control functions after last week's test session and discussed the integration with the potential fields with Mike.<br />
<br />
=== Planning ===<br />
Since time is running short, hard deadlines have been set for the different tasks:<br />
<br />
*State machine (+ speech function integration) - 02-06-2019, 22.00 - Collin + Job (+ Mike)<br />
*Kalman filter - 04-06-2019, 22.00 - Kevin + Yves<br />
*Presentation - 04-06-2019, 22.00 - Kevin<br />
*Driving - 05-06-2019, 22.00 - Mike + Job<br />
*Cabinet procedure - 02-06-2019, 22.00 - Collin + Job<br />
*Map + Nav-points - 05-06-2019, 22.00 - Yves<br />
*Visualisation OpenCV - Extra task, TBD<br />
<br />
=== Test on Wednesday 14.30 - 15.25 ===<br />
*Test spatial recognition<br />
<br />
=== Test on Thursday 13.30 - 15.25 ===<br />
*Driving + Map<br />
*Cabinet procedure<br />
*Total sequence<br />
<br />
== Week 7 - 6 June ==<br />
''Notes taken by Mike''<br />
=== Progress ===<br />
Kevin has been working on the presentation and on the perception functions that fit the map onto the detected features. Simple tests, in which the functions were fed manually with made-up realistic points as well as random points that should be ignored, suggest that it works. For some reason, however, the code does not execute repeatedly. Either way, the code requires some fine-tuning. This function takes the estimated robot position (odometry) and the LRF data as inputs and outputs the corrected position. It needs to be extended to take the previous navpoint as the origin of movement. Kevin expects this to be ready for testing by tomorrow's session.<br />
<br />
The presentation is almost done. The architecture slide needs to be simplified to prevent an overwhelming amount of information being visible on screen. The same goes for the state machine.<br />
<br />
Collin has integrated the state machine as much as possible. More public functions need to be made in the WorldModel object that allows the state machine to check whether the program can progress to any following state or not.<br />
<br />
Job and Mike were working on the DriveControl object. The current challenge is driving from point to point. This involves correcting the angle when it deviates from its straight trajectory as a result of the potential vector pushing it away. This is going to be tested in the testing session after this meeting. It also requires implementing the relative position of the end point of the current line trajectory, from Kevin's position estimation function.<br />
<br />
Yves (absent) has been working on implementing the published world map and supplying it with navpoints.<br />
<br />
=== Planning ===<br />
Thursday 6-6: appointment to work together in Gemini South OGO 0 from 9:00 to 10:45, then in Gemini South 4.23 until 12:30. The testing session is from 13:30 until 15:25. The plan is to attempt to integrate ''everything'' before this session to simply test as much as possible.<br />
<br />
Tuesday 11-6: appointment to work together from 8:45 until the testing session from 9:45 until 10:40. The entire code ''should'' be done by now. After the testing session, everything should be fine-tuned in the simulation environment.</div>
<hr />
<div>= Group members =<br />
{|<br />
|Collin Bouwens<br />
|<br />
| 1392794<br />
|-<br />
|Yves Elmensdorp<br />
|<br />
| 1393944<br />
|-<br />
|Kevin Jebbink<br />
|<br />
| 0817997<br />
|-<br />
|Mike Mostard<br />
|<br />
| 1387332<br />
|-<br />
|Job van der Velde<br />
|<br />
| 0855969<br />
|}<br />
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful Documents.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. Such obstructions are possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, its specifications are given, along with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°, which is divided into 1000 measurement points. These values were determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
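As an illustration of these numbers, the angle of an individual beam can be computed from its index, assuming the 1000 points are spread evenly over the 229.2° field of view, centred on the robot's forward direction (this indexing convention is an assumption, to be verified against the framework):<br />

```cpp
#include <cmath>

// Illustrative sketch: convert an LRF beam index to its angle in
// degrees, assuming `numBeams` equally distributed beams over the
// field of view, symmetric around the forward direction (0 degrees).
double beamAngleDeg(int index, int numBeams = 1000, double fovDeg = 229.2) {
    return -fovDeg / 2.0 + index * fovDeg / (numBeams - 1);
}
```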
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is run. On every tick, it is checked whether the current state has fulfilled its exit conditions. ''(To be extended.)''<br />
<br />
=== World model block ===<br />
''Kevin's description of spatial recognition will be added here.''<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved via implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
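As a sketch of the idea behind 'Drive', the velocity setpoint can be ramped along an S-curve so that the acceleration is zero at both ends of the ramp. This is a minimal illustration using a smoothstep polynomial, not the group's actual implementation:<br />

```cpp
#include <cmath>

// Illustrative S-curve velocity profile: ramp from v0 to v1 over T
// seconds along a smoothstep, so the acceleration is zero at both the
// start and the end of the ramp and the motion is fluent.
double scurveVelocity(double v0, double v1, double T, double t) {
    if (t <= 0.0) return v0;
    if (t >= T)   return v1;
    double s = t / T;
    double smooth = s * s * (3.0 - 2.0 * s);  // smoothstep: 3s^2 - 2s^3
    return v0 + (v1 - v0) * smooth;
}
```

'Drive distance' can be built on top of this by integrating the profile over time until the requested distance is covered.<br />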
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, so the potential field vector is zero and the robot keeps its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, making the potential field sum vector zero. Once again, the robot keeps its trajectory. In the third image, however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor until the sum vector reaches its steady-state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
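The summation described above can be sketched as follows. This is an illustrative implementation, assuming goal and obstacle points are given in the robot frame, with a single avoidance radius and repulsion gain; the actual shaping and gains used on the robot may differ:<br />

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Illustrative potential field sum vector: unit attraction towards the
// goal plus repulsion away from every LRF point inside the avoidance
// radius, with repulsion growing as the obstacle gets closer.
// All points are expressed relative to the robot.
Vec2 potentialFieldVector(const std::vector<Vec2>& obstacles,
                          Vec2 goal, double avoidRadius, double repulseGain) {
    Vec2 sum{0.0, 0.0};
    // Attraction: unit vector towards the goal.
    double gd = std::hypot(goal.x, goal.y);
    if (gd > 1e-9) { sum.x += goal.x / gd; sum.y += goal.y / gd; }
    // Repulsion: only obstacles inside the avoidance radius contribute.
    for (const Vec2& ob : obstacles) {
        double d = std::hypot(ob.x, ob.y);
        if (d > 1e-9 && d < avoidRadius) {
            double w = repulseGain * (avoidRadius - d) / d;
            sum.x -= w * ob.x / d;
            sum.y -= w * ob.y / d;
        }
    }
    return sum;
}
```

In a symmetric corridor the repulsion components cancel and only the attraction remains, matching the second situation in the figure.<br />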
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m; the angle runs from +114.6 to -114.6 degrees as measured from the front of the robot. This angle is divided into 1000 measurement points, sampled at a time interval that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actually measured values. The second option is to program the robot to drive backward slowly while facing a wall; the program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by finding the time it takes to reach the maximum velocity of the robot in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
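A skeleton of this state chart could look as follows; the state and event names are illustrative, not the exact ones used in the code:<br />

```cpp
#include <string>

// Hypothetical skeleton of the wall-following state machine depicted
// in the state chart above; state and event names are illustrative.
enum class State { DriveForward, AlignWithWall, FollowWall,
                   TurnLeft, TurnIntoExit, FollowCorridor, Finished };

State nextState(State s, const std::string& event) {
    switch (s) {
        case State::DriveForward:
            return event == "wall_ahead" ? State::AlignWithWall : s;
        case State::AlignWithWall:
            return event == "aligned" ? State::FollowWall : s;
        case State::FollowWall:
            if (event == "wall_ahead") return State::TurnLeft;      // next wall found
            if (event == "gap_right")  return State::TurnIntoExit;  // exit corridor found
            return s;
        case State::TurnLeft:
            return event == "turn_done" ? State::FollowWall : s;
        case State::TurnIntoExit:
            return event == "turn_done" ? State::FollowCorridor : s;
        case State::FollowCorridor:
            return event == "gap_right" ? State::Finished : s;      // crossed the finish line
        default:
            return s;
    }
}
```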
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program did complete the challenge, we were the slowest performing group as a result of these modifications to the configuration. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map from the map of the hospital. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
Points are placed at different locations on the map: at cabinets, at junctions, in front of doorways and in rooms. In the placement of these points, it is important that each point can be reached from a neighbouring point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighbouring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighbouring points and its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), the robot can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help in controlling the robot's trajectory while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive in order to reach the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
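A minimal sketch of Dijkstra's algorithm over the point map, assuming the graph is given as an adjacency list of (neighbour, straight-line distance) pairs:<br />

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's shortest-path algorithm over the navigation point graph:
// nodes are path points, edges connect points reachable in a straight
// line, weights are the straight-line distances. Returns the shortest
// distance from `start` to every node.
std::vector<double> dijkstra(
        const std::vector<std::vector<std::pair<int, double>>>& adj,
        int start) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), inf);
    using QE = std::pair<double, int>;  // (distance, node)
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;  // stale queue entry, skip
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}
```

The actual route is recovered by additionally storing, for each node, the predecessor from which its shortest distance was set.<br />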
<br />
== State Machine ==<br />
The figure below shows the state machine for this challenge. The state chart will be part of the world model block from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. This excludes functions, such as tracking the position of the robot on the map, which always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
== Wall finding algorithm ==<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used method is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In this design, the following processing steps are performed:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the point with the longest distance with the distance limit value<br />
##* If the value falls below the limit value, there are no further sub-segments in the global segment.<br />
##* If the value falls above the limit value, the segment is split at this point and steps 3.1 to 3.4 are performed again for the left and right parts of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
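Steps 3.1 to 3.4 can be sketched as a recursive function. This is an illustrative version, assuming Cartesian points and a single distance threshold:<br />

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance of point p to the line through segment end
// points a and b, with the line written as ax + by + c = 0
// (step 3.2/3.3 of the procedure above).
double pointLineDist(Pt p, Pt a, Pt b) {
    double A = b.y - a.y;
    double B = a.x - b.x;
    double C = b.x * a.y - a.x * b.y;
    return std::fabs(A * p.x + B * p.y + C) / std::hypot(A, B);
}

// Recursively split the range [first, last] at the farthest point
// whenever that point lies farther from the end-point line than
// `threshold`; the collected break indices are the detected corners.
void split(const std::vector<Pt>& pts, int first, int last,
           double threshold, std::vector<int>& breaks) {
    double dmax = 0.0;
    int imax = -1;
    for (int i = first + 1; i < last; ++i) {
        double d = pointLineDist(pts[i], pts[first], pts[last]);
        if (d > dmax) { dmax = d; imax = i; }
    }
    if (imax >= 0 && dmax > threshold) {
        split(pts, first, imax, threshold, breaks);
        breaks.push_back(imax);
        split(pts, imax, last, threshold, breaks);
    }
}
```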
<br />
Below is a visual representation of the split principle. The original image is taken from the 2017 EMC course, group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with a description of Mike's RANSAC function.'''<br />
<br />
A final line correction needs to be done, because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines between the points and then equating these lines to each other: corners follow from the intersections of adjacent fitted lines, and the final end points are found by projecting the found vertices perpendicularly onto the fitted line.<br />
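Equating two fitted wall lines to obtain a corrected corner point amounts to a 2x2 linear solve; a sketch with illustrative names:<br />

```cpp
#include <cassert>
#include <cmath>

// A fitted wall line in implicit form a*x + b*y + c = 0.
struct Line { double a, b, c; };

// Corner correction: intersect two fitted lines by solving
//   a1*x + b1*y = -c1
//   a2*x + b2*y = -c2
// with Cramer's rule. Returns false for (near-)parallel lines,
// in which case there is no corner between the two walls.
bool intersect(const Line& l1, const Line& l2, double& x, double& y) {
    double det = l1.a * l2.b - l2.a * l1.b;
    if (std::abs(det) < 1e-9) return false;  // parallel walls
    x = (-l1.c * l2.b + l2.c * l1.b) / det;
    y = (-l1.a * l2.c + l2.a * l1.c) / det;
    return true;
}
```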
<br />
== Path planning ==<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at start-up. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area specified in front of that cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation that PICO needs to have to face the cabinet. The direction is subtracted from PICO's actual orientation, and the resulting error is then corrected if PICO is not aligned correctly.<br />
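The orientation correction in front of a cabinet boils down to subtracting PICO's orientation from the stored direction and wrapping the result; a minimal sketch with illustrative names:<br />

```cpp
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

// Wrap an angle into (-pi, pi] so the correction always rotates the short way.
double wrapAngle(double a) {
    while (a > kPi)   a -= 2.0 * kPi;
    while (a <= -kPi) a += 2.0 * kPi;
    return a;
}

// Error between the direction stored in a cabinet path point and PICO's
// current orientation; a nonzero result means PICO still has to rotate.
double orientationError(double cabinetDirection, double robotTheta) {
    return wrapAngle(cabinetDirection - robotTheta);
}
```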
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
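The tables above define a weighted graph of path points, on which the route to a cabinet can be found with a standard shortest-path search. Below is a minimal Dijkstra sketch; the function and type names are illustrative and not those of the project code, and the edges are fed in from the path-length tables:<br />

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <queue>
#include <tuple>
#include <utility>
#include <vector>

// Dijkstra's shortest-path search over the path-point graph. Edges are
// undirected because PICO can traverse each connection in both directions.
std::vector<double> shortestDistances(int numPoints,
                                      const std::vector<std::tuple<int, int, double> >& edges,
                                      int start) {
    std::vector<std::vector<std::pair<int, double> > > adj(numPoints);
    for (std::size_t i = 0; i < edges.size(); ++i) {
        int u = std::get<0>(edges[i]);
        int v = std::get<1>(edges[i]);
        double w = std::get<2>(edges[i]);
        adj[u].push_back(std::make_pair(v, w));
        adj[v].push_back(std::make_pair(u, w));
    }
    std::vector<double> dist(numPoints, std::numeric_limits<double>::infinity());
    typedef std::pair<double, int> QueueEntry;  // (distance so far, point index)
    std::priority_queue<QueueEntry, std::vector<QueueEntry>,
                        std::greater<QueueEntry> > pq;  // min-heap on distance
    dist[start] = 0.0;
    pq.push(std::make_pair(0.0, start));
    while (!pq.empty()) {
        QueueEntry top = pq.top(); pq.pop();
        double d = top.first;
        int u = top.second;
        if (d > dist[u]) continue;  // outdated queue entry
        for (std::size_t k = 0; k < adj[u].size(); ++k) {
            int v = adj[u][k].first;
            double w = adj[u][k].second;
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push(std::make_pair(dist[v], v));
            }
        }
    }
    return dist;
}
```

For example, with the edges around the start point (4->5: 0.86, 4->6: 1.49, 5->3: 0.8, 5->6: 0.7, 3->6: 1.06), the shortest route from the start point to cabinet 3 goes via point 5 with total length 0.86 + 0.8 = 1.66.<br />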
<br />
= Conclusion & Recommendations =<br />
<br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Week 2 - 1 May ==<br />
''Notes taken by Mike.''<br />
<br />
Every following meeting requires concrete goals in order for the process to make sense. An agenda is welcome, though it does not need to be as strict as the ones used in DBL projects. The main goal of this meeting is to get to know the expectations of the design document that needs to be handed in next Monday, and which should be presented next Wednesday. These and other milestones, as well as intermediate goals, are to be described in a week-based planning on this Wiki.<br />
<br />
=== Design document ===<br />
The focus of this document lies on the process of making the robot software succeed in the escape room competition and the final competition. It requires a functional decomposition of both competitions. The design document should be written out both on the Wiki and in a PDF document that is to be handed in on Monday the 6th of May. This document is a mere snapshot of the actual design document, which grows and improves over time. That's what this Wiki is for. The rest of this section contains brainstormed ideas for each section of the design document.<br />
<br />
Requirements:<br />
* The entire software runs on one executable on the robot;<br />
* The robot is to autonomously drive itself out of the escape room;<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition;<br />
* The robot has five minutes to get out of the escape room;<br />
* The robot may not stand still for more than 30 seconds.<br />
<br />
Functions:<br />
* Detecting walls;<br />
* Moving;<br />
* Processing the odometry data;<br />
* Following walls;<br />
* Detecting doorways (holes in the wall).<br />
<br />
Components:<br />
* The drivetrain;<br />
* The laser rangefinder.<br />
<br />
Specifications:<br />
* Dimensions of the footprint of the robot, which is the widest part of the robot;<br />
* Maximum speed: 0.5 m/s translation and 1.2 rad/s rotation.<br />
<br />
Interfaces:<br />
* Gitlab connection for pulling the latest software build;<br />
* Ethernet connection to hook the robot up to a notebook to perform the above.<br />
<br />
=== Measurement plan ===<br />
The first two time slots next Tuesday have been reserved for us in order to get to know the robot. Everyone who is able to attend is expected to attend. In order for the time to be used efficiently, code is to be written to perform tests that follow from a measurement plan. This plan involves testing the limits of the laser rangefinder, such as the maximum distance that the laser can detect, as well as the field of view and the noise level of the data.<br />
<br />
=== Software design ===<br />
The overall thinking process of the robot software needs to be determined in a software design. This involves a state chart diagram that depicts the global functioning of the robot during the escape room competition. This can be tested with code using the simulator of the robot with a map that resembles the escape room layout.<br />
<br />
=== Tasks ===<br />
Collin and Mike: write the design document and make it available to the group members by Saturday.<br />
<br />
Kevin and Job: write a test plan with test code for the experiment session next Tuesday.<br />
<br />
Yves: draft an initial global software design and make a test map of the escape room for the simulation software.<br />
<br />
== Week 3 - 8 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on 8th of May.<br />
<br />
=== Strategy ===<br />
A change was made to the strategy of the Escape Room Challenge. The new strategy consists of two parts: a plan A and a plan B. First the robot performs plan A; if that strategy fails, plan B is performed. Plan A is to make the robot perform a laser scan, then rotate 180 degrees and perform another scan. This gives the robot information about the entire room, from which the software should be able to locate the doorway and escape the room. This plan may not work, since the doorway may be too far away for the laser to detect, or the software may fail to detect the doorway. Therefore plan B exists: drive the robot to a wall of the room and then follow the wall on the right-hand side until the robot crosses the finish.<br />
<br />
=== Presentation ===<br />
A Powerpoint presentation was prepared by Kevin for the lecture that afternoon. A few remarks on the presentation were:<br />
* Add the 'Concept system architecture', modified to have a larger font.<br />
* Add 'Communicating the state of the software' as a function<br />
* Keep the assignment explanation and explanation of the robot hardware short<br />
<br />
=== Concept system architecture ===<br />
The concept system architecture was made by Yves. The diagram should be checked for correct English, since some sentences are unclear. A few changes were made to the spelling; the content remained mostly the same.<br />
<br />
=== Measurement results ===<br />
The first test with the robot did not go smoothly: connecting to the robot proved more difficult than expected. When the test program was run, it was discovered that the laser sensor data contained a lot of noise. A test situation resembling the escape room was built, and all the data from the robot was recorded and saved. From this data, an algorithm can be designed to condition the sensor data. The data can also be used for the Spatial Feature Recognition.<br />
<br />
=== Tasks ===<br />
The task to be finished for next meeting:<br />
* Spatial Feature Recognition and Monitoring: Mike, Yves<br />
* Laser Range Finder data conditioning: Collin<br />
* Control: Job<br />
* Detailed software design for Escape Room Challenge: Kevin (Deadline: 9/5/2019)<br />
<br />
<br />
The next robot reservations are:<br />
* Tuesday 14/5/2019, from 10:45<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 15/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 4 - 15 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on 15th of May.<br />
<br />
=== Escape Room Challenge ===<br />
The test of the software for the Escape Room Challenge was successful. Small changes have been made to the code regarding the current state of the software being shown on the terminal. Also, the distance between the robot and the wall has been increased and the travel velocity of the robot has been decreased.<br />
A state machine has been made and put on the Wiki which describes the software.<br />
<br />
=== Wall detection ===<br />
A split and merge algorithm has been developed in Matlab. It can detect walls and corners. The algorithm needs to be tested and developed further. Furthermore, an algorithm needs to be developed that uses the information from the split and merge to find the position of the robot on the map. The current plan is to use a Kalman filter. This needs to be developed further.<br />
<br />
=== Drive Control ===<br />
The function to smoothly accelerate and decelerate the robot is not yet finished. Once the function has been shown to work in the simulation, it can be tested on the robot. This will be either Thursday the 16th of May or the Tuesday after.<br />
<br />
In order to succeed in the final challenge, better agreements and stricter deadlines need to be made and followed by the group.<br />
<br />
=== Tasks ===<br />
*Yves: Filter double points from the 'split and merge' algorithm.<br />
*Mike: Develop the architecture for the C++ project.<br />
*Job: Code a function for the S-curve acceleration for the x and y directions and the z rotation.<br />
*Kevin: Develop a Kalman filter to compare the data from the 'split and merge' with a map.<br />
*Collin: Develop a finite state machine for the final challenge.<br />
<br />
The next robot reservations are:<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 22/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 5 - 22 May ==<br />
''Notes taken by Kevin.''<br />
<br />
These are the notes from the group meeting on the 22nd of May.<br />
<br />
=== Finite State Machine and Path planning ===<br />
Collin created a finite state machine of the hospital challenge. The FSM is a fairly complete picture of the hospital challenge, but a different 'main' FSM needs to be made in which the actions of the robot itself are shown in a clear manner. Collin also came up with a path planning method. In this method, important points are selected on the given map, which are connected with each other where possible. The robot will then be able to drive from point to point in a straight line. If some time is left, we could improve the robot by letting it drive between points in a smoother manner.<br />
<br />
=== Wall detection ===<br />
Yves has continued working on the split and merge algorithm. He has tried to port his Matlab implementation to C++, but this has proved more difficult than anticipated. He will continue working on this.<br />
<br />
=== Drive Control ===<br />
Job has continued working on the drive control, which is now almost finished. Some tests need to be done on the real robot to see if it functions properly in real life as well. Furthermore, the velocity in the drive control needs to be limited, as it is still unbounded at this time.<br />
<br />
=== Architecture ===<br />
Mike has worked on creating the overall architecture of the robot. All the other contributions can then be placed in the correct position in this architecture. <br />
<br />
=== Spatial Awareness ===<br />
Kevin has worked on the Kalman filter and spatial recognition. His idea is to first combine the state prediction and odometry information within a Kalman filter to give an estimated position and orientation. This estimation can then be combined with laser range data to correct any remaining errors in the estimate.<br />
<br />
=== Last Robot reservation ===<br />
During this reservation we were finally able to quickly set up the laptop and the robot for the first time without any issues. During this test we collected a lot of data by putting the robot in different positions in a constructed room, and saving all the laser range data in a rosbag file. Most of the data is static, with the robot standing still, but we also got some data in which the robot drives forward and backwards a little bit.<br />
<br />
=== Next robot reservations ===<br />
The next reservation is Thursday the 23rd of May. During this reservation we will have two hours to test the drive control made by Job. Particular attention will be given to static friction and the maximum possible acceleration of the robot. Furthermore, since we want to implement multiple threads in our program, we would like to know how much the robot can handle in real life. As such, a stress test will be made. The reservation for next week will be made on Wednesday the 29th of May, during the 3rd and 4th hours.<br />
<br />
=== Tasks ===<br />
*Job: Finish the drive control and integrate it in the architecture. Also create a main FSM with Collin.<br />
*Kevin: Work on an implementation of the Kalman filter and the spatial recognition software.<br />
*Collin: Continue working on the path planning implementation. Also create a main FSM with Job.<br />
*Yves: Continue working on the C++ implementation of split and merge. Also look into the speak functions of the robot.<br />
*Mike: Work on collision detection and on creating multiple threads.<br />
*Everyone: Read old wikis of other groups to get some inspiration.<br />
<br />
<br />
Next meeting: Wednesday 29/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 6 - 29 May ==<br />
''Notes taken by Job''<br />
<br />
These are the notes from the group meeting of the 29th of May.<br />
<br />
=== Progress ===<br />
There has been little integration of functions and everyone has kept working on their separate tasks. It is vital to write the state machine in code so the different functions can be implemented and tasks that still need to be completed can be found.<br />
<br />
Mike has worked on the potential field implementation and has achieved a working state for this function. This function needs to be expanded with a correction for the orientation of the robot.<br />
<br />
Yves has worked on the spatial recognition integration of the RANSAC function. This needs to be finished so it can be used for the Kalman filter Kevin has worked on.<br />
<br />
Kevin needs the work from Yves to finish the Kalman filter and needs to add a rotation correction.<br />
<br />
Collin has worked on the shortest path algorithm which is ready to be used.<br />
<br />
Job has improved the DriveControl functions after last week's test session and discussed the integration with the potential fields with Mike.<br />
<br />
=== Planning ===<br />
Since time is running short, hard deadlines have been set for the different tasks:<br />
<br />
*State machine (+ speech function integration) - 02-06-2019, 22.00 - Collin + Job (+ Mike)<br />
*Kalman filter - 04-06-2019, 22.00 - Kevin + Yves<br />
*Presentation - 04-06-2019, 22.00 - Kevin<br />
*Driving - 05-06-2019, 22.00 - Mike + Job<br />
*Cabinet procedure - 02-06-2019, 22.00 - Collin + Job<br />
*Map + Nav-points - 05-06-2019, 22.00 - Yves<br />
*Visualisation OpenCV - Extra task, TBD<br />
<br />
=== Test on Wednesday 14.30 - 15.25 ===<br />
*Test spatial recognition<br />
<br />
=== Test on Thursday 13.30 - 15.25 ===<br />
*Driving + Map<br />
*Cabinet procedure<br />
*Total sequence<br />
<br />
== Week 7 - 6 June ==<br />
''Notes taken by Mike''<br />
=== Progress ===<br />
Kevin has been working on the presentation and the perception functions that fit the map on the detected features. Simple tests suggest that it works by manually feeding the functions with made up realistic points, as well as random points that need to be ignored by the function. For some reason the code does not execute repeatedly though. Either way, the code requires some fine-tuning. This function takes the estimated robot position (odometry) and the LRF data as inputs and has the corrected position as an output. It needs to be extended to take the previous navpoint as the origin of movement. Kevin expects this to be ready for testing by tomorrow's session.<br />
<br />
The presentation is almost done. The architecture slide needs to be simplified to prevent an overwhelming amount of information being visible on screen. The same goes for the state machine.<br />
<br />
Collin has integrated the state machine as much as possible. More public functions need to be made in the WorldModel object that allows the state machine to check whether the program can progress to any following state or not.<br />
<br />
Job and Mike were working on the DriveControl object. The current challenge is driving from point to point. This involves correcting the angle when it deviates from its straight trajectory as a result of the potential vector pushing it away. This is going to be tested in the testing session after this meeting. It also requires implementing the relative position of the end point of the current line trajectory, from Kevin's position estimation function.<br />
<br />
Yves (absent) has been working on implementing the published world map and supplying it with navpoints.<br />
<br />
=== Planning ===<br />
Thursday 6-6: appointment to work together in Gemini South OGO 0 from 9:00 to 10:45, then in Gemini South 4.23 until 12:30. The testing session is from 13:30 until 15:25. The plan is to attempt to integrate ''everything'' before this session to simply test as much as possible.<br />
<br />
Tuesday 11-6: appointment to work together from 8:45 until the testing session from 9:45 until 10:40. The entire code ''should'' be done by now. After the testing session, everything should be fine-tuned in the simulation environment.</div>
<hr />
<div>
<br />
= Useful information =<br />
[https://www.robotshop.com/media/files/pdf/gostai-jazz-information-sheet.pdf Robot specs document]<br />
<br />
[http://www.et.byu.edu/~ered/ME537/Notes/Ch5.pdf S-curve equations]<br />
<br />
[[:Media:4SC020_Design_Document_2019_Group_3.pdf|PDF of initial Design Document]]<br />
<br />
= Planning =<br />
{| class="wikitable"<br />
|-<br />
! Week 2<br />
! Week 3<br />
! Week 4<br />
! Week 5<br />
! Week 6<br />
! Week 7<br />
! Week 8<br />
|-<br />
| Wed. 1 May: initial meeting: getting to know the requirements of the design document.<br />
| '''Mon. 6 May: design document handed in by 17:00. Responsibility: Collin and Mike.'''<br />
| '''Wed. 15 May: escape room competition.'''<br />
| <br />
| <br />
| '''Wed. 5 June: final design presentation.'''<br />
| '''Wed. 12 June: final competition.'''<br />
|-<br />
| <br />
| Tue. 7 May: first tests with the robot. Measurement plan and test code is to be made by Kevin and Job.<br />
| Tue. 14 May: Implementing and testing the code for the Escape Room Challenge<br />
| <br />
| <br />
|<br />
| <br />
|-<br />
| <br />
| Wed. 8 May: meeting: discussing the design document and the initial tests, as well as the software design made by Yves.<br />
'''Presentation of the initial design by Kevin during the lecture.'''<br />
| Wed. 15 May: Developing the software design for the Final Challenge<br />
| <br />
|<br />
| <br />
| <br />
|}<br />
<br />
= Introduction =<br />
<br />
= System Design =<br />
This chapter describes the final system design for the hospital challenge. The system design is based on the original Design Document that can be found under Useful information.<br />
<br />
The PICO robot is a telepresence robot that is capable of driving around while monitoring its environment. In the Escape Room Competition, the robot is placed somewhere inside a rectangular room with unknown dimensions with one doorway that leads to the finish line. Once the robot crosses the finish line without bumping into walls, the assignment is completed. The Final Competition involves a dynamic hospital-like environment, where the robot is assigned to approach a number of cabinets based on a known map, while avoiding obstacles.<br />
<br />
== Components ==<br />
The PICO robot is a modified version of the ''Jazz'' robot, which is originally developed by Gostai, now part of Aldebaran. The key components of the robot that are relevant to this project are the drivetrain and the laser rangefinder. The drivetrain is holonomic, as it consists of three omni-wheels that allow the robot to translate in any direction without necessarily rotating. This adds the benefit of scanning the environment in a fixed orientation, while moving in any direction. The software framework allows the forward and sideways velocity to be set, as well as the horizontal angular velocity. The framework also approximates the relative position and angle from the starting position.<br />
<br />
The laser rangefinder is a spatial measurement device that is capable of measuring the horizontal distance to any object within a fixed field of view. The software framework measures a finite number of equally distributed angles within the field of view and notifies when new measurement data is available. Using this data, walls and obstacles in the environment of the robot can be detected.<br />
<br />
Lastly, the robot is fitted with loudspeakers and a WiFi connection according to the data sheet of the Jazz robot. This can be useful for interfacing during operation, as described in the 'Interfaces' section. Whether the PICO robot actually has these speakers and the WiFi connectivity remains to be determined.<br />
<br />
== Requirements ==<br />
Different requirement sets have been made for the Escape Room Competition and the Final Competition. The requirements are based on the course descriptions of the competitions and the personal ambitions of the project members. The final software is finished once all the requirements are met.<br />
<br />
The requirements for the Escape Room Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself out of the escape room.<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot has five minutes to get out of the escape room.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
<br />
The requirements for the Final Competition are as follows:<br />
* The entire software runs on one executable on the robot.<br />
* The robot is to autonomously drive itself around in the dynamic hospital.<br />
* The robot may not 'bump' into objects, where 'bumping' is judged by the tutors during the competition.<br />
* The robot may not stand still for more than 30 seconds.<br />
* The robot can visit a variable number of cabinets in the hospital.<br />
* The software will communicate when it changes its state, why it changes its state and to what state it changes.<br />
* The robot navigates based on a provided map of the hospital and data obtained by the laser rangefinder and the odometry data.<br />
<br />
== Functions ==<br />
A list of functions the robot needs to fulfil has been made. Some of these functions are for both competitions, while some are for either the Escape Room or Final Competition. These functions are:<br />
* In general:<br />
** Recognising spatial features;<br />
** Preventing collision;<br />
** Conditioning the odometry data;<br />
** Conditioning the rangefinder data;<br />
** Communicating the state of the software.<br />
* For the Escape Room Competition:<br />
** Following walls;<br />
** Detecting the end of the finish corridor.<br />
* For the Final Competition:<br />
** Moving to points on the map;<br />
** Calculating current position on the map;<br />
** Planning the trajectory to a point on the map;<br />
** Approaching a cabinet based on its location on the map.<br />
<br />
The key function in this project is recognising spatial features. The point of this function is to analyse the rangefinder data in order to detect walls, convex or concave corners, dead spots in the field of view, and gaps in the wall that could be a doorway. This plays a key role during the Escape Room Competition in order to detect the corridor with the finish line in it, and therefore has a priority during the realisation of the software. For this function to work reliably, it is essential that the rangefinder data is analysed for noise during the initial tests. If there is a significant amount of noise, the rangefinder data needs to be conditioned before it is fed into the spatial feature recognition function. As a safety measure, it is important to constantly monitor the spatial features in order to prevent collisions with unexpected obstacles.<br />
<br />
Lastly, the trajectory planning function plays a major role during the Final Competition, as this determines the route that the robot needs to follow in order to get to a specified cabinet. This function needs to take obstacles into account, in case the preferred route is obstructed. This is possible, as the documentation about the Final Competition shows a map in which multiple routes lead to a certain cabinet. One of these routes can be blocked, in which case the robot needs to calculate a different route.<br />
<br />
== Specifications ==<br />
The specifications describe important dimensions and limitations of the hardware components of the robot that will be used during the competitions. For each component, the specifications are given, together with the source of each specification.<br />
<br />
The drivetrain of the robot can move the robot in the ''x'' and ''y'' directions and rotate the robot in the ''z'' direction. The maximum speed of the robot is limited to ''±0.5 m/s'' translation and ''±1.2 rad/s'' rotation. These values are from the Embedded Motion Control Wiki page. The centre of rotation of the drivetrain needs to be known in order to predict the translation of the robot after a rotation. This will be determined with a measurement.<br />
<br />
The dimensions of the footprint of the robot need to be known in order to move the robot through corridors and doorways without collision. The footprint is ''41 cm'' wide and ''35 cm'' deep, according to the Jazz robot datasheet. A measurement will be made to check these dimensions.<br />
<br />
The laser rangefinder will be used to detect and measure the distance to objects in the vicinity of the robot. The measurement distance range of the sensor is from ''0.1 m'' to ''10.0 m'' with a field of view of 229.2°. The range of the sensor is divided into 1000 parts. These values are determined with the PICO simulator and need to be verified with measurements on the real robot.<br />
<br />
== Interfaces ==<br />
The interfacing of the robot determines how the project members interact with the robot in order to set it up for the competitions. It also plays a role during operation, in the way that it interacts with the spectators of the competitions. On the development level there is an Ethernet connection available to the robot. This allows a computer to be hooked up to the robot in order to download the latest version of the software using ''git'', by connecting to the Gitlab repository of the project group. This involves using the ''git pull'' command, which downloads the latest changes from the repository, including the executable that contains the robot software.<br />
<br />
On the operation level it is important for the robot to communicate the status of the software. This is useful for debugging the software, as well as clarifying the behaviour during the competitions. This can be made possible with the loudspeaker, by recording voice lines that explain what the robot currently senses and what the next step is that it will perform. Not only is this functionally important, but it can also add a human touch to the behaviour of the robot. In case that the PICO robot has been altered to not have loudspeakers, it needs to be determined during testing if the WiFi interface can be utilised in order to print messages in a terminal on a computer that is connected to the robot.<br />
<br />
== System architecture ==<br />
[[File:Concept_RobotArchitecture.png|1000px]]<br />
<br />
=== Perception block ===<br />
The purpose of the perception object is to condition the sensor data. This mainly involves filtering invalid points from the LRF measurements, such that these points cannot pollute the information that is fed into the feature detection algorithm. Such invalid points include points that are erroneously measured at the origin of the sensor, probably as a result of dust on the sensor.<br />
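Such a conditioning step might look like the sketch below; the range limits are the ones listed in the Specifications section, while the function name and data layout are assumptions rather than the emc framework's actual interface:<br />

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Keep only LRF ranges inside the sensor's valid window (0.1 m to 10 m);
// points measured at (or very near) the sensor origin are dropped as
// dust/noise. Invalid ranges are flagged with a negative value so the
// beam index (and thus the beam angle) of the remaining points is preserved.
std::vector<double> filterRanges(const std::vector<double>& ranges,
                                 double minRange = 0.1,
                                 double maxRange = 10.0) {
    std::vector<double> out(ranges.size());
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        bool valid = ranges[i] >= minRange && ranges[i] <= maxRange;
        out[i] = valid ? ranges[i] : -1.0;
    }
    return out;
}
```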
<br />
=== Monitor block ===<br />
The monitor object, as the name implies, monitors the execution of the program. In this object, the state machine is being run. On every tick, it is checked whether the current state has fulfilled its exit conditions.<br />
<br />
=== World model block ===<br />
''To be extended with Kevin's description of the spatial recognition in the world model.''<br />
<br />
=== Planner block ===<br />
<br />
=== Control block ===<br />
The control block contains actuator control and any output to the robot interface. <br />
<br />
==== Drivetrain ====<br />
The actuators are controlled such that the movement of the robot is fluent. This is achieved by implementing an S-curve for any velocity change. General information on S-curves can be found via the link under Useful Information.<br />
<br />
Two functions have been constructed, 'Drive' for accelerating or decelerating to a certain speed in any direction, and 'Drive distance' for traveling a certain distance in any direction.<br />
<br />
Drive has been further incorporated in a function that uses a potential field. This function prevents the robot from bumping into objects in a fluent manner. See the figure below for a visual representation of the implementation of a potential field. The leftmost image shows the attraction field to the goal, the middle image shows the repulsion from obstacles and the rightmost image shows the combination of the two. Any wall or object is taken into account for this function.<br />
<br />
[[File:Potential_field.png|1000px]]<br />
<br />
''Image obtained from: [[https://www.ais.uni-bonn.de/papers/ISPRS_nieuw_schad_beh.pdf]]''<br />
<br />
The potential field vector is calculated in real-time, as the robot is expected to run into dynamic obstacles in the final challenge. This also takes the imperfections in the physical environment into account. The way the potential field is obtained is visualised in the figure below.<br />
<br />
[[File:PotentialFieldCalculationSchematic_EMC3_2019.png|1000px]]<br />
<br />
The first image shows how the robot is far enough away from any walls or obstacles, and thus the potential field vector is zero, causing the robot to keep its (straight) trajectory. In the second image, the robot is driving through a narrow corridor. As a result of the symmetry of the environment, the potential field component vectors cancel each other out, causing the potential field sum vector to be zero. Once again, the robot keeps its trajectory. In the third image however, the robot is closer to the left wall, causing the left potential field component vectors to outweigh the right ones. As such, the potential field sum vector points to the right, causing the robot to drive towards the middle of the corridor, until the sum vector reaches its steady state value when the robot is in the middle again. The fourth image depicts a situation where an obstacle, such as a random box or a walking person, enters the avoidance region around the robot. Once again, the potential field sum vector points away from the obstacle, causing the robot to drive around the obstacle as depicted by the dotted line.<br />
<br />
= Testing =<br />
This chapter describes the most important tests and test results during this project.<br />
<br />
==Goal==<br />
The goal is to perform the initial setup of the robot and to determine the actual properties of the laser range finder, encoders and drive train. For the laser range finder, these properties consist of the range, angle, sensitivity and amount of noise. The most important property for the encoder is its accuracy. <br />
<br />
The most important properties of the drivetrain are its accuracy, and its maximum translational and rotational acceleration for smooth movement.<br />
<br />
==Simulation results==<br />
The range of the laser range finder according to the simulation is 10 cm to 10 m, and the angle is +114.6 to -114.6 degrees as measured from the front of the robot. This field of view is divided into 1000 measurement points, sampled at a rate that can be set by the user.<br />
<br />
==Execution==<br />
===Initial setup===<br />
The initial setup for connecting with the Pico robot is described on the following wiki page: [[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control/Using_Pico]] <br />
<br />
===Laser range finder===<br />
Two tests can be executed to determine the range, angle and accuracy of the laser range finder. First of all, the output values from the range finder can be saved in a file and compared to actual measured values. The second option is to program the robot to drive backward slowly while facing a wall. The program should stop the robot as soon as it no longer registers the wall. The same can be done while driving forward to determine the minimum range. To determine the angle, the robot can be rotated.<br />
<br />
===Encoders===<br />
The values supplied by the encoders are automatically converted to distance in the ''x''- and ''y''-direction and a rotation ''a'' in radians. These can be compared to measured values in order to determine the accuracy.<br />
<br />
===Drive train===<br />
The maximum acceleration of the robot can be determined by measuring the shortest time in which the maximum velocity can be reached in a smooth manner. The maximum translational velocity of the robot is set to 0.5 m/s and the maximum rotational velocity to 1.2 rad/s.<br />
<br />
<br />
==Results==<br />
<br />
= Escape room challenge =<br />
This chapter summarizes the approach for the escape room challenge and offers some reflection on the execution of the challenge.<br />
<br />
== State chart ==<br />
The state chart below depicts the wall following program that the robot is to execute during the escape room challenge. In a nutshell: the robot drives forward until a wall is detected, lines up with said wall to the right, and starts following it by forcing itself to stay between a minimum and a maximum distance to the wall. When something is detected in front, it is assumed that the next wall to follow is found, and thus the robot should rotate 90 degrees counterclockwise so it can start following the next wall. When a gap is detected to the right of the robot, it is assumed that the exit corridor has been found, and thus the robot should turn into the exit. Then the robot keeps following the right wall in the corridor until, once again, a gap is detected to the right of the robot. At this point, the robot should have crossed the finish line.<br />
<br />
[[File:EMC_2019_group3_ER_FSM.png|EMC_2019_group3_ER_FSM.png|1000px]]<br />
<br />
== Reflection ==<br />
Due to a lack of time and more resources being put into the final challenge, the code for the escape room challenge had to be simplified. The original plan was to have the robot scan the environment, identify the exit, and when identified, drive towards the exit and drive to the finish line. In case the robot could not identify the exit, the robot would start following the wall instead, as a robust backup plan. The testing session before the challenge proved to be too short, and only the wall follower could be tested. Therefore, only the wall follower program was executed during the challenge.<br />
<br />
As a precaution against bumping into the walls, we reduced the speed of the robot and increased the distance the robot keeps to the wall by modifying the config file in the software. Although our program completed the challenge, we were the slowest performing group as a result of these configuration changes. We felt, however, that these modifications were worth the slowdown, and they proved the robustness of the simple approach our software took.<br />
<br />
= Hospital Competition =<br />
This chapter summarizes the approach for the hospital challenge and offers some reflection on the execution of the challenge. <br />
<br />
== Approach ==<br />
The general approach to the challenge is to create a point map of the hospital map. The figure below shows such a point map:<br />
<br />
[[File:Point_map_example.png]]<br />
<br />
A point is placed at various locations on the map. These locations are: at cabinets, on junctions, in front of doorways and in rooms. In placing these points, it is important that each point can be reached from another point in a straight line. The goal of these points is that the robot can navigate from one side of the hospital to the other by driving from point to point. The points that the robot can drive to in a straight line from a given point are its neighboring points.<br />
<br />
The placement of each point is defined by the distance and direction to its neighboring points and by its surrounding spatial features. When the robot is on a point (A) and wants to drive to a different point (B), it can use the distance and direction from A to B to drive to where B approximately is. Then, using the spatial features surrounding point B, the robot can determine its location relative to B more accurately and drive to B. For the path between points, it can be defined whether this path runs through a doorway or hallway, or through a room. This can help to decide how the robot trajectory should be controlled while driving from point to point.<br />
<br />
If the robot needs to drive from a start point to an end point which is not neighbouring, the software will create a route to that point. This route is a list of points to which the robot needs to drive in order to get to the end point. To make sure the route is as efficient as possible, an algorithm is used which calculates the shortest route: Dijkstra's algorithm. A similar algorithm is also used in car navigation systems to obtain the shortest route.<br />
<br />
== State Machine ==<br />
The figure below shows the state machine for this challenge. The state chart will be part of the "World model block" from the system architecture. This diagram will be used as the basis for the software written for the final challenge.<br />
<br />
[[File:State machine final.png|800px]]<br />
<br />
Per state, the functions which need to be performed are stated. These exclude functions, such as tracking the position of the robot on the map, which will always run in a separate thread. The state chart is designed such that all the requirements of the final challenge will be fulfilled.<br />
<br />
== Wall finding algorithm ==<br />
To allow PICO to navigate safely, it must know where it is on the world map and what is around it. PICO is equipped with a LIDAR scanner that scans the environment with laser beams. This data is then processed to determine where all walls and objects are. There are many ways to process the data into useful information. A commonly used algorithm is the split-and-merge algorithm, with the RANSAC algorithm as an extension. These methods are also used within this project. In the case of this design, we perform the following processing steps:<br />
<br />
# Filtering measurement data<br />
# Recognizing and splitting global segments (recognizing multiple walls or objects)<br />
# Apply the split algorithm per segment<br />
## Determine end points of segment<br />
## Determine the line through these end points (ax + by + c = 0)<br />
## For each data point between these end points, determine the distance perpendicular to the line (d = abs(a*x+b*y+c)/sqrt(a^2+b^2))<br />
## Compare the largest of these distances with the distance threshold<br />
##* If this value falls below the threshold, there are no more segments (parts) in the global segment.<br />
##* If the value falls above the threshold, the segment is split at this point and steps 3.1 to 3.4 are performed again for the parts to the left and right of this point.<br />
# All segment points found are combined using the RANSAC algorithm.<br />
<br />
Below is a visual representation of the split principle. The original image is used from the EMC course of 2017 group 10 [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2017_Group_10#Corner_detection]:<br />
<br />
[[File:Split and merge resized.gif|center|alt=interface diagram group 10|Split and merge procedure.]]<br />
<br />
'''To be extended with Mike's description of his RANSAC function.'''<br />
<br />
A final line correction is needed because the RANSAC function only returns start and end points somewhere between the found vertices. The lines need to be fitted so that the corners and end points align with the real wall lines. This is done by determining the lines through the points and then equating these lines to each other: each corner is the intersection of two adjacent lines, and each remaining end point is found by projecting the found vertex perpendicularly onto its line.<br />
<br />
== Path planning ==<br />
The path points are determined partly automatically and partly by hand. The program loads the JSON map file at start-up. The code detects where all the cabinets are and which side is the front of each cabinet. Each cabinet path point is placed exactly in the middle of the virtual area specified in front of the cabinet. The remaining path points are entered by hand. A path point has three variables: the x and y coordinates and the direction. The direction only applies when the path point is in front of a cabinet; it specifies the orientation PICO needs to have to face the cabinet. The direction is subtracted from the actual orientation of PICO, and the difference is corrected afterwards if PICO is not aligned correctly.<br />
<br />
[[File:JsonMapMetPathPoints.png|700px]]<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Cabinet positioning points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 0 (cabinet 0) || 0.4 || 3.2<br />
|-<br />
| 1 (cabinet 1) || 0.4 || 0.8<br />
|-<br />
| 2 (cabinet 2) || 0.4 || 5.6<br />
|-<br />
| 3 (cabinet 3) || 6.3 || 3.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path points'''<br />
|-<br />
! scope="col" | '''Point'''<br />
! scope="col" | '''X'''<br />
! scope="col" | '''Y'''<br />
|-<br />
| 4 (Start point) || 5.0 || 2.5<br />
|-<br />
| 5 || 5.5 || 3.2<br />
|-<br />
| 6 || 5.5 || 3.9<br />
|-<br />
| 7 || 5.5 || 5.6<br />
|-<br />
| 8 || 3.5 || 5.6<br />
|-<br />
| 9 || 2.0 || 5.6<br />
|-<br />
| 10 || 0.4 || 4.7<br />
|-<br />
| 11 || 1.25 || 4.7<br />
|-<br />
| 12 || 1.25 || 3.5<br />
|-<br />
| 13 || 0.4 || 2.7<br />
|-<br />
| 14 || 1.25 || 2.7<br />
|-<br />
| 15 || 1.25 || 1.5<br />
|-<br />
| 16 || 1.25 || 0.8<br />
|-<br />
| 17 || 2.0 || 1.6<br />
|-<br />
| 18 || 3.5 || 1.6<br />
|-<br />
| 19 || 3.5 || 3.6<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (1/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 4->5 || 0.86<br />
|-<br />
| 4->6 || 1.49<br />
|-<br />
| 5->3 || 0.8<br />
|-<br />
| 5->6 || 0.7<br />
|-<br />
| 3->6 || 1.06<br />
|-<br />
| 6->7 || 1.7<br />
|-<br />
| 7->8 || 2.0<br />
|-<br />
| 8->9 || 1.5<br />
|-<br />
| 9->2 || 1.6<br />
|-<br />
| 9->10 || 1.84<br />
|-<br />
| 9->11 || 1.17<br />
|-<br />
| 2->10 || 0.9<br />
|-<br />
| 10->11 || 0.85<br />
|-<br />
| 11->12 || 1.2<br />
|}<br />
<br />
{| class="TablePager" style="width: 230px; min-width: 240px; margin-left: 2em; float:left; color: black;"<br />
|+ '''Path lengths (2/2)'''<br />
|-<br />
! scope="col" | '''Path'''<br />
! scope="col" | '''Length'''<br />
|-<br />
| 12->13 || 1.17<br />
|-<br />
| 12->14 || 0.8<br />
|-<br />
| 13->0 || 0.5<br />
|-<br />
| 13->14 || 0.85<br />
|-<br />
| 14->15 || 1.2<br />
|-<br />
| 15->1 || 1.1<br />
|-<br />
| 15->16 || 0.7<br />
|-<br />
| 15->17 || 0.76<br />
|-<br />
| 1->16 || 0.85<br />
|-<br />
| 16->17 || 1.1<br />
|-<br />
| 17->18 || 1.5<br />
|-<br />
| 18->19 || 2.0<br />
|-<br />
| 19->8 || 2.0<br />
|}<br />
<br />
<div style="clear:both"></div><br />
<br><br />
<br />
= Appendices =<br />
This chapter contains some documents that are of minor importance to the project.<br />
<br />
== Week 2 - 1 May ==<br />
''Notes taken by Mike.''<br />
<br />
Every following meeting requires concrete goals in order for the process to make sense. An agenda is welcome, though it does not need to be as strict as the ones used in DBL projects. The main goal of this meeting is to get to know the expectations of the design document that needs to be handed in next Monday, and which should be presented next Wednesday. These and other milestones, as well as intermediate goals, are to be described in a week-based planning on this Wiki.<br />
<br />
=== Design document ===<br />
The focus of this document lies on the process of making the robot software succeed in the escape room competition and the final competition. It requires a functional decomposition of both competitions. The design document should be written out both on the Wiki and in a PDF document that is to be handed in on Monday the 6th of May. This document is a mere snapshot of the actual design document, which grows and improves over time. That is what this Wiki is for. The rest of this section contains brainstormed ideas for each section of the design document.<br />
<br />
Requirements:<br />
* The entire software runs on one executable on the robot;<br />
* The robot is to autonomously drive itself out of the escape room;<br />
* The robot may not 'bump' into walls, where 'bumping' is judged by the tutors during the competition;<br />
* The robot has five minutes to get out of the escape room;<br />
* The robot may not stand still for more than 30 seconds.<br />
<br />
Functions:<br />
* Detecting walls;<br />
* Moving;<br />
* Processing the odometry data;<br />
* Following walls;<br />
* Detecting doorways (holes in the wall).<br />
<br />
Components:<br />
* The drivetrain;<br />
* The laser rangefinder.<br />
<br />
Specifications:<br />
* Dimensions of the footprint of the robot, which is the widest part of the robot;<br />
* Maximum speed: 0.5 m/s translation and 1.2 rad/s rotation.<br />
<br />
Interfaces:<br />
* Gitlab connection for pulling the latest software build;<br />
* Ethernet connection to hook the robot up to a notebook to perform the above.<br />
<br />
=== Measurement plan ===<br />
The first two time slots next Tuesday have been reserved for us in order to get to know the robot. Everyone who is able to attend is expected to attend. In order for the time to be used efficiently, code is to be written to perform tests that follow from a measurement plan. This plan involves testing the limits of the laser rangefinder, such as the maximum distance that the laser can detect, as well as the field of view and the noise level of the data.<br />
<br />
=== Software design ===<br />
The overall thinking process of the robot software needs to be determined in a software design. This involves a state chart diagram that depicts the global functioning of the robot during the escape room competition. This can be tested with code using the simulator of the robot with a map that resembles the escape room layout.<br />
<br />
=== Tasks ===<br />
Collin and Mike: write the design document and make it available to the group members by saturday.<br />
<br />
Kevin and Job: write a test plan with test code for the experiment session next tuesday.<br />
<br />
Yves: draft an initial global software design and make a test map of the escape room for the simulation software.<br />
<br />
== Week 3 - 8 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on 8th of May.<br />
<br />
=== Strategy ===<br />
A change was made to the strategy of the Escape Room Challenge. The new strategy consists of two parts, a plan A and a plan B. First the robot will perform plan A; if this strategy fails, plan B will be performed. Plan A is to make the robot perform a laser scan, then rotate the robot 180 degrees and perform another scan. This gives the robot information about the entire room. From this information the software should be able to locate the doorway and escape the room. This plan may not work, since the doorway may be too far away for the laser to detect it, or the software may not be able to detect the doorway. Therefore, plan B exists. This strategy is to drive the robot to a wall of the room. The wall will then be followed on the right-hand side, until the robot crosses the finish line.<br />
<br />
=== Presentation ===<br />
A Powerpoint presentation was prepared by Kevin for the lecture that afternoon. A few remarks on the presentation were:<br />
* Add the 'Concept system architecture', modified to have a larger font.<br />
* Add 'Communicating the state of the software' as a function<br />
* Keep the assignment explanation and explanation of the robot hardware short<br />
<br />
=== Concept system architecture ===<br />
The concept system architecture was made by Yves. The diagram should be checked for its English, since some sentences are unclear. A few changes were made to the spelling. The content remained mostly the same.<br />
<br />
=== Measurement results ===<br />
The first test with the robot did not go smoothly. Connecting with the robot proved more difficult than expected. When the test program was run, it was discovered that the Laser Sensor data contained a lot of noise. A test situation, like the escape room, was set up and all the data from the robot was recorded and saved. From this data, an algorithm can be designed to condition the sensor data. The data can also be used for the Spatial Feature Recognition.<br />
<br />
=== Tasks ===<br />
The task to be finished for next meeting:<br />
* Spatial Feature Recognition and Monitoring: Mike, Yves<br />
* Laser Range Finder data conditioning: Collin<br />
* Control: Job<br />
* Detailed software design for Escape Room Challenge: Kevin (Deadline: 9/5/2019)<br />
<br />
<br />
The next robot reservations are:<br />
* Tuesday 14/5/2019, from 10:45<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 15/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 4 - 15 May ==<br />
''Notes taken by Collin.''<br />
<br />
These are the notes from the group meeting on 15th of May.<br />
<br />
=== Escape Room Challenge ===<br />
The test of the software for the Escape Room Challenge was successful. Small changes have been made to the code regarding the current state of the software being shown on the terminal. Also, the distance between the robot and the wall has been increased and the travel velocity of the robot has been decreased.<br />
A state machine has been made and put on the Wiki which describes the software.<br />
<br />
=== Wall detection ===<br />
A Split and Merge algorithm has been developed in Matlab. It can detect walls and corners. The algorithm needs to be further tested and developed. Furthermore, an algorithm needs to be developed that uses the information from the split and merge to find the position of the robot on the map. The current plan is to use a Kalman filter. This needs to be further developed.<br />
<br />
=== Drive Control ===<br />
The function to smoothly accelerate and decelerate the robot is not yet finished. Once the function has been shown to work in the simulation, it can be tested on the robot. This will be either Thursday 16th of May or the Tuesday after.<br />
<br />
In order to succeed in the final challenge, better agreements and stricter deadlines need to be made and followed by the group.<br />
<br />
=== Tasks ===<br />
*Yves: Filter double points from the 'Split and merge' algorithm.<br />
*Mike: Develop the architecture for the C++ project.<br />
*Job: Code a function for the S-curve acceleration for x, y direction and z rotation.<br />
*Kevin: Develop a Kalman filter to compare the data from 'Split and merge' with a map.<br />
*Collin: Develop a finite state machine for the final challenge.<br />
<br />
The next robot reservations are:<br />
* Thursday 16/5/2019, from 14:30<br />
<br />
Next meeting: Wednesday 22/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 5 - 22 May ==<br />
''Notes taken by Kevin.''<br />
<br />
These are the notes from the group meeting on 22nd of May.<br />
<br />
=== Finite State Machine and Path planning ===<br />
Collin created a finite state machine of the hospital challenge. The FSM is a fairly complete picture of the hospital challenge, but a different 'main' FSM needs to be made in which the actions of the robot itself are shown in a clear manner. Collin also came up with a path planning method. In this method, important points are selected on the given map, which are connected with each other where possible. The robot will then be able to drive from point to point in a straight line. If some time is left, we could eventually improve the robot by letting it drive between points in a smoother manner.<br />
<br />
=== Wall detection ===<br />
Yves has continued working on the split-and-merge algorithm. He has tried to port his Matlab implementation to C++, but this has proved to be more difficult than anticipated. He will continue working on this.<br />
<br />
=== Drive Control ===<br />
Job has continued working on the drive control, which is now almost finished. Some tests need to be done on the real robot to see if it functions properly in real life as well. Furthermore, the velocity in the drive control needs to be limited, as it is still unbounded at this time.<br />
<br />
=== Architecture ===<br />
Mike has worked on creating the overall architecture of the robot. All the other contributions can then be placed in the correct position in this architecture. <br />
<br />
=== Spatial Awareness ===<br />
Kevin has worked on the Kalman filter and spatial recognition. His idea is to first combine the state prediction and odometry information within a Kalman filter to give an estimated position and orientation. This estimation can then be combined with laser range data to correct for any remaining mistakes in the estimation.<br />
<br />
=== Last Robot reservation ===<br />
During this reservation we were finally able to quickly set up the laptop and the robot for the first time without any issues. During this test we collected a lot of data by putting the robot in different positions in a constructed room, and saving all the laser range data in a rosbag file. Most of the data is static, with the robot standing still, but we also got some data in which the robot drives forward and backwards a little bit.<br />
<br />
=== Next robot reservations ===<br />
The next reservation is Thursday May 23. During this reservation we will have two hours to test the drive control made by Job. Particular attention will be given to static friction and the maximum possible acceleration of the robot. Furthermore, since we want to implement multiple threads in our program, we would like to know how much load the robot can handle in real life. As such, a stress test will be performed. The reservation for next week will be made on Wednesday May 29, on the 3rd and 4th hour.<br />
<br />
=== Tasks ===<br />
*Job: finish drive control and integrate it in the architecture. Also create a main FSM with Collin.<br />
*Kevin: Work on an implementation of the Kalman filter and spatial recognition software.<br />
*Collin: Continue working on the path planning implementation. Also create a main FSM with Job.<br />
*Yves: Continue working on the C++ implementation of split and merge. Also look into the speak functions of the robot.<br />
*Mike: Work on collision detection and on creating multiple threads.<br />
*Everyone: Read old wikis of other groups to get some inspiration.<br />
<br />
<br />
Next meeting: Wednesday 29/5/2019, 13:30 in Atlas 5.213<br />
<br />
== Week 6 - 29 May ==<br />
''Notes taken by Job''<br />
<br />
These are the notes from the group meeting of the 29th of May.<br />
<br />
=== Progress ===<br />
There has been little integration of functions and everyone has kept working on their separate tasks. It is vital to write the state machine in code so the different functions can be implemented and tasks that still need to be completed can be found.<br />
<br />
Mike has worked on the potential field implementation and has achieved a working state for this function. This function needs to be expanded with a correction for the orientation of the robot.<br />
<br />
Yves has worked on the spatial recognition integration of the RANSAC function. This needs to be finished so it can be used for the Kalman filter Kevin has worked on.<br />
<br />
Kevin needs the work from Yves to finish the Kalman filter and needs to add a rotation correction.<br />
<br />
Collin has worked on the shortest path algorithm which is ready to be used.<br />
<br />
Job has improved the DriveControl functions after last week's test session and discussed the integration with the potential fields with Mike.<br />
<br />
=== Planning ===<br />
Since time is running short, hard deadlines have been set for the different tasks:<br />
<br />
*State machine (+ speech function integration) - 02-06-2019, 22.00 - Collin + Job (+ Mike)<br />
*Kalman filter - 04-06-2019, 22.00 - Kevin + Yves<br />
*Presentation - 04-06-2019, 22.00 - Kevin<br />
*Driving - 05-06-2019, 22.00 - Mike + Job<br />
*Cabinet procedure - 02-06-2019, 22.00 - Collin + Job<br />
*Map + Nav-points - 05-06-2019, 22.00 - Yves<br />
*Visualisation OpenCV - Extra task, TBD<br />
<br />
=== Test on Wednesday 14.30 - 15.25 ===<br />
*Test spatial recognition<br />
<br />
=== Test on Thursday 13.30 - 15.25 ===<br />
*Driving + Map<br />
*Cabinet procedure<br />
*Total sequence<br />
<br />
== Week 7 - 6 June ==<br />
''Notes taken by Mike''<br />
=== Progress ===<br />
Kevin has been working on the presentation and the perception functions that fit the map on the detected features. Simple tests suggest that it works by manually feeding the functions with made up realistic points, as well as random points that need to be ignored by the function. For some reason the code does not execute repeatedly though. Either way, the code requires some fine-tuning. This function takes the estimated robot position (odometry) and the LRF data as inputs and has the corrected position as an output. It needs to be extended to take the previous navpoint as the origin of movement. Kevin expects this to be ready for testing by tomorrow's session.<br />
<br />
The presentation is almost done. The architecture slide needs to be simplified to prevent an overwhelming amount of information being visible on screen. The same goes for the state machine.<br />
<br />
Collin has integrated the state machine as much as possible. More public functions need to be made in the WorldModel object that allow the state machine to check whether the program can progress to any following state or not.<br />
<br />
Job and Mike were working on the DriveControl object. The current challenge is driving from point to point. This involves correcting the angle when it deviates from its straight trajectory as a result of the potential vector pushing it away. This is going to be tested in the testing session after this meeting. It also requires implementing the relative position of the end point of the current line trajectory, from Kevin's position estimation function.<br />
<br />
Yves (absent) has been working on implementing the published world map and supplying it with navpoints.<br />
<br />
=== Planning ===<br />
Thursday 6-6: appointment to work together in Gemini South OGO 0 from 9:00 to 10:45, then in Gemini South 4.23 until 12:30. The testing session is from 13:30 until 15:25. The plan is to attempt to integrate ''everything'' before this session to simply test as much as possible.<br />
<br />
Tuesday 11-6: appointment to work together from 8:45 until the testing session from 9:45 until 10:40. The entire code ''should'' be done by now. After the testing session, everything should be fine-tuned in the simulation environment.</div>S136625