Embedded Motion Control 2018 Group 6

From Control Systems Technology Group

Group members

Name: Report name: Student id:
Thomas Bosman T.O.S.J. Bosman 1280554
Raaf Bartelds R. Bartelds add number
Josja Geijsberts J. Geijsberts 0896965
Rokesh Gajapathy R. Gajapathy 1036818
Tim Albu T. Albu 19992109
Marzieh Farahani Marzieh Farahani Tutor

Introduction

Gostai-Jazz-500x500.jpg

The course "Embedded Motion Control" is about designing software that can operate an autonomous robot. It teaches students the basics of designing software for reactive, real-time, autonomous task execution. The goal was to design software for the robot PICO so that it can perform tasks by itself. These tasks include driving out of a room, creating a map of the environment, parking against a wall and finding an object that was previously absent.

This wiki page details the design of Group 6. Section 1 explains the requirements, specifications and initial design. Section 2 gives insight into the architecture of the code, the mapping algorithm and the functions developed for the course. Next, Section 3 explains and evaluates the approach to each challenge based on the results. Finally, the last section summarizes the work and the lessons learned.

Initial Design


The report for the initial design can be found here.
The first presentation can be found here.
The final presentation can be found here.

Requirements and Specifications

Use cases for Escape Room

1. Wall and Door Detection

2. Move with a certain profile

3. Navigate


Use cases for Hospital Room

1. Mapping

2. Move with a certain profile

3. Orient itself

4. Navigate

Requirements and specification list

The table below enumerates the requirements for the system, their specifications and the corresponding validation.

Table1.JPG

Functions, Components and Interfaces

The software that will be deployed on PICO can be categorized into four components: perception, monitoring, plan and control. They exchange information through the world model, which stores all the data. The software will have just one thread and will give control to each component in turn, in a loop: first perception, then monitoring, plan and control. Multitasking to improve performance might be added at a later stage of the project. Below, the functions of the four components are described, for both the Escape Room Challenge (ERC) and the Hospital Challenge (HC).
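The single-threaded loop described above can be sketched as follows; all names and the placeholder bodies are illustrative assumptions, not the actual implementation:

```cpp
#include <cassert>

// Sketch of the single-threaded control loop. The world model is a plain
// struct through which the four components exchange data; the fields and
// values here are hypothetical.
struct WorldModel {
    bool obstacle_front = false;  // written by monitoring, read by control
    double speed_setpoint = 0.0;  // written by control
};

void perception(WorldModel& wm) { /* read sensors, store processed data */ }
void monitoring(WorldModel& wm) { wm.obstacle_front = false; /* detect obstacles */ }
void plan(WorldModel& wm)       { /* decide next action from monitored events */ }
void control(WorldModel& wm)    { wm.speed_setpoint = wm.obstacle_front ? 0.0 : 0.5; }

// One iteration of the loop: each component gets control in turn.
void step(WorldModel& wm) {
    perception(wm);
    monitoring(wm);
    plan(wm);
    control(wm);
}
```

The main loop would call step() at a fixed rate; each component only talks to the world model, never directly to another component.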

The given PICO robot has two kinds of sensors: a laser range finder (LRF) and wheel encoders providing odometry. The function of the LRF is to provide detailed information about the environment through its laser beams. The LRF specifications are shown in the table below.

Specification        Value         Unit
Detectable distance  0.01 to 10    m
Scanning angle       -2 to 2       rad
Angular resolution   0.004004      rad
Scanning time        33            ms

At each scanning angle, a distance relative to PICO is measured. Hence, at each time instance an array of distances corresponding to the array of scanning angles is obtained.

The three encoders provide the odometry data, i.e. the position of PICO in the x, y and θ directions at each time instance. The LRF and odometry data play a crucial role in mapping the environment. The mapped environment is preprocessed by two major blocks, Perception and Monitoring, and passed to the World Model. The control approach for the challenges is feedforward, since the observers provide the necessary information about the environment so that PICO can react accordingly.

Interfaces and I/O relations

The figure below summarizes the different components and their input/output relations.

Architecture group6 1.png

Overview of the interface of the software structure:

The diagram below provides a graphical overview of what the state machine will look like. Not shown in the diagram is the case when the events "Wall was hit" and "Stop" occur. The occurrence of these events is checked in each state, and if they happen, the state machine navigates to the state STOP. The state machine is likely to become more complex, since some states will comprise a sub-state machine.

Interface PICO.png

Code structure and components

Code Architecture

Gmapping

Figure X: Overview of the transforms required for gmapping

Making a map is crucial for the second challenge. An easy way to detect the object placed in the hospital is to first determine the initial state of the empty environment, and then, during the searching phase, compare the current state to the initial one to determine the location of the object. For this, the initial state of the environment (i.e. the hospital corridor and the rooms) has to be stored in some way. This can be done in various ways, one of which is mapping: by initially mapping the environment, all changes can be detected by comparing the current state of the environment to the initial map. Mapping itself can also be done in many ways. Another important element is the location of PICO during the entire challenge. The software needs to know the current location of the robot to determine its position relative to the walls, the object or the initial position, and to perform tasks such as parking or driving to the object.

There are many different ways to solve these two problems. No perfect solution exists; existing solutions are adapted to the scenario and the available sensor data. In the robotics field, a frequently used approach tackles localization and mapping together in the so-called computational problem SLAM (Simultaneous Localization And Mapping). SLAM maps the environment and simultaneously improves the accuracy of the current position estimate based on previously mapped features. There are many ways to solve this computational problem [1]; the choice depends on the available resources. In the case of PICO there are three types of sensor data: control effort, odometry data and laser rangefinder data. Of the many available SLAM algorithms, the choice was made to use Gmapping, a ROS package that provides laser-based SLAM through a node called slam_gmapping [2].


The library used by this ROS package comes from OpenSlam [3]. The approach developed there uses a particle filter in which each particle carries an individual map of the environment. As the robot moves around, the particles receive updated information about the environment, providing a very accurate estimate of the robot's location and thereby of the resulting map.

Gmapping requires three transforms between coordinate frames to be present: one between the map and the odometry frame, one between the odometry frame and base_link, and one between base_link and the laser frame. Gmapping provides the first transform, but the remaining two have to be supplied. The second is published by a transform broadcaster that broadcasts the relevant information from the odometry. The final link is merely a static transform between base_link and the laser and can therefore be published by a static transform publisher. These transforms are made using TF2 [4]. An overview of the required transforms can be requested using the view_frames command of the TF2 package, which is shown in Figure X.
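As a sketch, the static base_link-to-laser transform could be published from a launch file with the tf2_ros static_transform_publisher; the frame names follow the ROS conventions above, but the offsets below are placeholders, not PICO's actual laser mounting position:

```xml
<!-- Hypothetical launch fragment; offsets are placeholders. -->
<launch>
  <!-- odom -> base_link is broadcast by a small node that republishes the
       odometry as a TF2 transform (not shown here). -->
  <!-- base_link -> laser is static, so a static_transform_publisher suffices.
       Arguments: x y z yaw pitch roll parent_frame child_frame -->
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="laser_broadcaster"
        args="0.0 0.0 0.3 0 0 0 base_link laser" />
</launch>
```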

Gmapping definitely has various limitations, such as:

  • Limited map size
  • Computation time
  • Environment has to stay constant
  • Small objects can't be detected as they are filtered out by the averaging

Result:

Getting gmapping to work within the emc framework took quite some effort, but as seen below: it works! The map created by gmapping is saved as a matrix in the world model. Each cell of the matrix can take one of three values: 100 for known occupied space (in other words, the walls), 0 for known empty space and -1 for unknown space, i.e. space that cannot be reached by the laser. Saving the map in this format is very convenient.

Visualization of gmapping when running the wall follower.
Figure X2: The original map compared to the map created by gmapping
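To illustrate the matrix convention (100 / 0 / -1 as described above), a minimal sketch of how such a grid could be queried, for instance to measure how much of the map is already known; the names and grid layout are assumptions, not the project's code:

```cpp
#include <cassert>
#include <vector>

// Occupancy values as described in the text.
enum Occupancy { UNKNOWN = -1, FREE = 0, OCCUPIED = 100 };

// Fraction of cells that are already known (occupied or free), which could
// e.g. help decide when mapping is complete. Grid is rows of columns.
double exploredFraction(const std::vector<std::vector<int>>& grid) {
    int known = 0, total = 0;
    for (const auto& row : grid)
        for (int cell : row) {
            ++total;
            if (cell != UNKNOWN) ++known;
        }
    return total == 0 ? 0.0 : static_cast<double>(known) / total;
}
```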


With the map saved as a matrix in the world model several tasks can then easily be performed:

Map Analysis

First design

After a geometric map has been made, theoretically a semantic map (a tree structure with a corridor, doors and rooms) could be derived. The first approach was to make such a map: OpenCV aligns the map (minAreaRect and warpAffine), splits it into different sections (distanceTransform and watershed) and fits rooms (minAreaRect) on every section. If the semantic map could be made, the room that requires passing the highest number of doors would be easy to find. First results were promising, as can be seen in Figure X3. However, the method was very sensitive to changes in corridor form and size, and it could not cope with doors at the corners of a room. Therefore, the high-level hint was ignored, and the focus shifted to finding the new object.

Figure X3: Failed approach to detect rooms, doors and the corridor

Final implementation

Without using the high-level hint, the object can be found by comparing the current and original maps. After the first map is completed, it is stored in an OpenCV Mat object. The robot starts following the wall again and gmapping continues to map the environment. Every time gmapping has updated, mapCompare compares the new map with the old map. To overcome minor differences due to noise in the maps, the black walls are first made a little thicker (erode()). Then the new map is subtracted from the original map (subtract()). The noise from scanning artefacts is removed with another erode(), which now acts in the opposite direction, since the detected objects are white. If white pixels are present in the resulting image, an object has been found. OpenCV locates and marks the object and stores a picture of it. The erode and subtract steps are visualised in Figure X4.

Figure X4: Animation of the steps OpenCV makes to detect objects
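The erode-subtract-erode idea can be sketched without OpenCV on a small binary grid; here 1 marks a wall/object pixel, `thicken` plays the role of the first erode() (which expands the black walls of the old map) and `shrink` the role of the second. All names are illustrative, and this is a simplified stand-in, not the OpenCV pipeline itself:

```cpp
#include <cassert>
#include <vector>

using Grid = std::vector<std::vector<int>>; // 1 = wall/object pixel, 0 = empty

// Grow every wall pixel to its 4-neighbours (tolerates small map shifts).
Grid thicken(const Grid& g) {
    int h = g.size(), w = g[0].size();
    Grid out(h, std::vector<int>(w, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (g[y][x]) {
                out[y][x] = 1;
                if (y > 0)     out[y-1][x] = 1;
                if (y < h - 1) out[y+1][x] = 1;
                if (x > 0)     out[y][x-1] = 1;
                if (x < w - 1) out[y][x+1] = 1;
            }
    return out;
}

// Pixels present in the new map but not in the (thickened) old map.
Grid subtractMaps(const Grid& newMap, const Grid& oldThick) {
    int h = newMap.size(), w = newMap[0].size();
    Grid out(h, std::vector<int>(w, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out[y][x] = (newMap[y][x] && !oldThick[y][x]) ? 1 : 0;
    return out;
}

// Keep only pixels whose 4 neighbours are also set: isolated noise vanishes.
Grid shrink(const Grid& g) {
    int h = g.size(), w = g[0].size();
    Grid out(h, std::vector<int>(w, 0));
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x)
            out[y][x] = (g[y][x] && g[y-1][x] && g[y+1][x] &&
                         g[y][x-1] && g[y][x+1]) ? 1 : 0;
    return out;
}

// An object is present if any white pixel survives the pipeline.
bool objectFound(const Grid& oldMap, const Grid& newMap) {
    Grid diff = shrink(subtractMaps(newMap, thicken(oldMap)));
    for (const auto& row : diff)
        for (int v : row) if (v) return true;
    return false;
}
```

A solid 3x3 blob survives the final shrink, while a single noisy pixel does not, which is exactly the robustness trade-off discussed below.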

Problems

Gmapping sees new objects as scan artefacts and only accepts them as objects after multiple scans; therefore it takes some time before PICO recognises the object. Also, the erode-subtract-erode method is not very robust: only a small shift and noise difference between the maps is allowed, making it unusable for large floor plans. The coordinates of the object in the picture are not translated to gmapping coordinates, and as there is no navigation function, the robot cannot drive to the found object yet.

Available functions

Wall follower

Working principle wall follower

The wall follower is the basic driving algorithm used for the two challenges. It was used on its own to escape the room, and it served as the driving algorithm during the hospital challenge.

A good wall follower has a few characteristics:
- It does not get stuck
- It can be started at any location
- It doesn't hit the wall

Working of the algorithm:

Upon starting, PICO drives in a straight line towards the first wall. When the distance to this wall becomes smaller than 0.45 m, it stops and aligns itself with the wall by checking a few laser points on its left side.
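The decision rule of the wall follower can be sketched as a small function; the predicate names are illustrative, not the actual code:

```cpp
#include <cassert>

// Sketch of the wall follower's per-loop decision, following the behaviour
// described in the Escape Room section.
enum class Action { TurnLeft, Forward, TurnRight };

// freeLeft:  no wall detected on the left side
// freeFront: no obstacle within the safety distance ahead
Action wallFollowerStep(bool freeLeft, bool freeFront) {
    if (freeLeft)  return Action::TurnLeft;  // empty space left: corner or door
    if (freeFront) return Action::Forward;   // wall on the left, path ahead clear
    return Action::TurnRight;                // blocked: rotate until aligned again
}
```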

Localization and orientation

gMapping solves the simultaneous localization and mapping problem. It publishes information about the current position relative to the initial position. We want this position to be stored in the world model, as many decisions depend on the position of the robot. The position can be obtained by listening to the transform from the frame "map", the fixed world frame created by gMapping, to the frame "base_link", which is situated at the base of the robot. From this transform, we want to know three things: the (x,y) position relative to the initial position of the robot and the orientation of the robot relative to its initial orientation. This is achieved by creating a listener that listens to the broadcasted transform from "map" to "base_link" and saves the position and the orientation as doubles in the world model.
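Since PICO moves in the plane, the rotation part of the map-to-base_link transform is a pure z-rotation, so the orientation double stored in the world model can be recovered from the transform's quaternion as sketched below (planar assumption; a full 3D conversion would use all four components):

```cpp
#include <cassert>
#include <cmath>

// Planar case: the quaternion is (0, 0, sin(theta/2), cos(theta/2)),
// so the yaw angle follows directly from its z and w components.
double yawFromQuaternion(double qz, double qw) {
    return 2.0 * std::atan2(qz, qw);
}
```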

Parking

Parking algorithm in the simulation.

One of the functions to be performed by PICO during the hospital challenge is parking after mapping and reaching the initial position. PICO has to park backwards, touching the wall. Once parked, PICO should say "I am Parked!". This function is implemented using the task software architecture.

World Model:
The world model object consists of the monitoring, plan, perception and control states.
Monitoring:

  • mo_obstacle_parking (an obstacle was found while parking)
  • mo_parking (PICO has reached the initial position and the parking plan is running)
  • mo_parked (PICO has completed parking)


Plan:

  • pl_initial_position_reached (the initial position is reached after mapping)
  • pl_moveforward_parking (PICO moves forward)
  • pl_standstill_parking (PICO stops once an obstacle is found)
  • pl_rotate_parking (PICO rotates 180 degrees)
  • pl_movebackward_parking (PICO moves backward)
  • pl_stop_parking (PICO stops once it touches the wall)


Control:

  • If the initial position is reached, PICO rotates 180 degrees.
  • After rotating, it moves backward.
  • If the control effort exceeds a certain value, it stops.
  • Finally, IO.speak "I am Parked!".
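The rotate / move backward / stop sequence above can be sketched as a small state machine; the state names mirror the plan events, but the thresholds are placeholders that would have to be tuned on the real robot:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the parking plan; thresholds are illustrative placeholders.
enum class ParkState { Rotate, MoveBackward, Stop };

const double kEffortLimit = 0.8;   // placeholder wall-contact threshold
const double kAngleTol    = 0.05;  // placeholder alignment tolerance [rad]

ParkState parkStep(ParkState s, double rotationError, double controlEffort) {
    switch (s) {
        case ParkState::Rotate:        // pl_rotate_parking: turn 180 degrees
            return std::abs(rotationError) < kAngleTol ? ParkState::MoveBackward : s;
        case ParkState::MoveBackward:  // pl_movebackward_parking
            return std::abs(controlEffort) > kEffortLimit ? ParkState::Stop : s;
        case ParkState::Stop:          // pl_stop_parking: speak "I am Parked!"
            return s;
    }
    return s;
}
```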

Room and door recognition

Finding the Door, Driving Through It

The room escape challenge and the hospital challenge have a set of basic skills in common. Among them are detecting a door and driving through it. In the following, we describe a solution we considered.


Requirements:

R1. The robot shall find the position of the door.

R2. The robot shall drive through the door.

R3. At all times, the robot shall not touch the walls.


Assumption:

A1. For the sake of simplicity, this solution makes the assumption that the robot can see all the walls around it -- that is, at any position of the robot inside the room, all the walls are within scanning distance.

A2. The robot is inside the room, at a distance from the walls that allows it to rotate without touching them -- considering the small radial movements that occur during a rotation in practice.


Design Decisions:

DD1. Based on A2, the robot shall find new information by scanning 360 degrees.

DD2. Based on A2, it can however happen that from the initial position the robot won't be able to find the door, let alone drive through it -- for instance if the initial position is close to a corner and the door is located at one of the adjacent walls. Therefore, we split our algorithm into several parts: find a position from where the robot can surely see the door, navigate there, and from there scan again to find the door. After that, drive through it.

DD3. Based on DD2, we want the robot to first find the "middle" of the room. We calculate it as the center of mass of the scanned points.

DD4. Our algorithm will perform the following:

1. Scan 360 degrees.

2. Calculate the center of mass for the scanned points, and navigate to it.

3. Scan again 360 degrees.

4. Calculate the position of the door.

5. Calculate the position of the middle of the door.

6. Drive through the middle of the door.


Remark: The best solution for step 4 is to identify the door by finding the two discontinuities at its two margins. In our solution, for the time being, we have however assumed that scanning through the door gives very small values (since the actual distance is very large, the sensor then returns values that should be considered erroneous). This solution works for the room escape challenge.

Remark: Step 6 can go wrong if the direction of driving through the middle of the door is not orthogonal to the door: the robot might then pass through the door at an angle and touch the doorposts, or a wall very close to them. Therefore, after step 5, we should have:

6'. Find the point 40cm in front of the middle of the door, and navigate to it.

7'. Drive from there through the middle of the door.


Due to lack of time, we implemented the solution with steps 1 - 6.
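Step 2 of the algorithm, computing the center of mass of the scanned points, is straightforward; a minimal sketch, assuming the points are already converted to Cartesian coordinates:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// The "middle" of the room is approximated by the mean of the scan points.
Point centerOfMass(const std::vector<Point>& pts) {
    Point c{0.0, 0.0};
    if (pts.empty()) return c;
    for (const Point& p : pts) { c.x += p.x; c.y += p.y; }
    c.x /= pts.size();
    c.y /= pts.size();
    return c;
}
```

Note that the center of mass of the scan points is only a rough proxy for the geometric middle of the room: walls closer to the robot contribute more points, so the estimate is biased, which is acceptable here since any interior point with line of sight to all walls suffices.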


Result: In the following video we can see a test-case scenario.

video file link: https://drive.google.com/open?id=1Lwc4l8tbk_NFR4zeFK-r6-r6EjG1Ca2d

State machine and execution of the code

Challenges approach and evaluation

Escape Room Challenge

Approach

A simple wall-following code was used for the escape room challenge. The idea is as follows: once started, PICO first drives in a straight line, stopping 30 cm from the first encountered wall. Once a wall is detected, PICO aligns its left side with the wall and starts following it. While following the wall, it tests in each loop whether it can turn left. If an obstacle is detected, it checks whether it can drive forward. If it cannot drive forward, it starts turning right until it is aligned with the wall in front of it. Then the loop starts again. This approach is very simple but robust. As long as PICO is aligned with the wall on its left and nothing is in front of it, it drives forward. When it reaches a corner, it turns and follows the next wall. If it drives past a door, it detects empty space on its left and turns. Given the simple geometry of the room, this code should be sufficient for the simple task of getting out.

In the following section, more details are given about the different components of the code (perception, monitoring, plan, control, world model, main).

The world model is a class in which all the information is stored. The other components, perception, monitoring, plan and control, are functions which are called at a frequency of 20 Hz by the main loop.

- World model: For this simple task, the world model is relatively simple. It serves as a communication interface for the other functions: each function stores data in the world model and extracts the information it requires.

- Perception: In this function, the data from the laser rangefinder is processed. First, a simple filter is applied to remove the faulty data points where the laser beams hit PICO itself; this is done by discarding distances smaller than 10 cm. Then the index of each data point and the angular increment are used to calculate the angle at which the measurement was taken. Finally, the angle and the distance are used to convert the measurement into Cartesian coordinates.

- Monitoring: In monitoring selective parts of the data are used to determine if there are obstacles to the left or front of the robot.

- Plan: For this challenge, the plan function does not add much functionality; it relays the information from monitoring. It was implemented because it would be needed for the more complicated challenge and would simplify later implementation.

- Control: In the control function, speed setpoints are sent to the actuators of PICO, depending on the positions of the obstacles.
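The perception step described above (discard self-hits below 10 cm, then convert each index and distance to Cartesian coordinates) can be sketched as follows; the parameter names mirror common LRF conventions and are assumptions, not the emc framework's actual API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Filter out laser beams that hit PICO itself, then convert each remaining
// (index, distance) pair into a Cartesian point in the robot frame.
std::vector<Point> toCartesian(const std::vector<double>& ranges,
                               double angle_min, double angle_increment) {
    std::vector<Point> pts;
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        if (ranges[i] < 0.10) continue;            // laser hit PICO itself
        double angle = angle_min + i * angle_increment;
        pts.push_back({ranges[i] * std::cos(angle),
                       ranges[i] * std::sin(angle)});
    }
    return pts;
}
```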

The script was simple and included some elements that were not necessary but would be required for the next challenge. They were included to ease implementation at a later stage.

Advantages and disadvantages

The main advantages of the solution explained in the previous paragraph are its simplicity and robustness.

Result

Only two teams made it out of the room, one of which was ours! We also had the fastest time, so we won the Escape Room Challenge!

The winning attempt at the Escape Room Challenge.

Hospital challenge

Approach

The hospital challenge had three distinct tasks that had to be performed for the challenge to be completed. First, the entire environment had to be mapped. Once back at the initial position, PICO had to park backward against the wall, and finally it had to find the object and drive to it.

The approach used in the challenge was a simpler version of the planned one due to time constraints. The planned approach is explained first. Upon starting, PICO drives towards the wall on its left, and the wall follower is used to follow the wall and map the entire environment. In parallel, the (x,y) extracted from gMapping is used to determine the straight-line distance to the origin. Once this distance has exceeded 1 m, the algorithm starts comparing the distance to a reference value of 0.4 m. This distance is no magic number: it was derived from the information that the maximal width of the corridor is 1.5 m and PICO is approximately 0.4 m wide. When the distance to the origin becomes smaller than the reference value, PICO is close to its starting point and the parking algorithm can be started. Always hugging the left wall is enough, for this challenge, to guarantee that the entire environment is mapped. Once the initial position is reached, PICO turns 90 degrees based on the orientation from gMapping and then drives backward in a straight line, checking the control effort to determine when the wall is reached. When parking is done, the map is saved as a big matrix. This matrix would be used in combination with the high-level hint to determine the location of the door to the room where the object is located. A path planning algorithm based on Dijkstra would determine a path to the door, and a path following algorithm would drive PICO to the room entrance. Once the entrance was reached, PICO would map the room a second time and use OpenCV to determine the location of the object. Finally, the combination of path planning and path following would be used to drive towards the object.
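The lap-completion check described above (first move more than 1 m away from the origin, then return within 0.4 m of it) can be sketched as follows; names are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Parking may only trigger once PICO has first left the neighbourhood of
// the origin; otherwise it would "park" immediately at the start.
struct LapMonitor {
    bool leftOrigin = false;
    // (x, y) is the position from gMapping; returns true when parking
    // should start.
    bool update(double x, double y) {
        double d = std::hypot(x, y);
        if (d > 1.0) leftOrigin = true;
        return leftOrigin && d < 0.4;
    }
};
```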


The final approach is a lot simpler and is detailed next. Up to the parking, the approach is the same; a different approach is used to locate and drive towards the object. Upon parking, the map at that time is saved as an independent file. Once PICO has parked, it starts following the left wall again, but this time it compares, in parallel, the saved map with the version that the mapping algorithm keeps updating. With the use of OpenCV, the most probable location of the object is determined. Once the location of the object is determined, PICO keeps following the wall until one of its coordinates (x or y) matches the corresponding coordinate of the object. Because the wall is always on PICO's left, the object will then always be on PICO's right. Therefore, PICO simply turns right and compares its current location with the object's location; when it comes within 0.3 m of the object, it stops and says that it has found the object.

Advantages and Disadvantages

The main advantages of the solution implemented on PICO are its robustness and simplicity. The wall follower written for the escape room is very robust: under no condition present in the challenge would it get stuck or hit a wall. This served as a solid foundation for the rest of the approach. It would not be the fastest, but it would be robust and work under all conditions.


The chosen approach has a few disadvantages. It does not take the high-level hint into account, which would save a lot of time in finding the object. Furthermore, no path planning algorithm is available; such an algorithm would increase the speed at which PICO reaches the object.

Result and Evaluation

The hospital challenge didn't entirely go as expected: we were not able to complete the entire challenge, even though we finished second! The mapping part of the challenge went very well. We were able to map the entire environment without hitting a wall and to display the map using rviz. But upon returning to the start position, the parking algorithm did not switch on and PICO just kept following the wall. The wall follower worked extremely well: never was a wall hit, nor did it get stuck. It had a little trouble crossing small corridors because of the safety distances implemented to avoid collisions; the only consequence was that it drove through corridors slowly. This could have been solved either by reducing the safety distances or by implementing the distance detection in a smarter way. Furthermore, gmapping worked very well; the map was consistent with the hospital.

Several problems were encountered. First, the initial position used by gmapping is determined from the odometry data when the algorithm is started. For the challenge, PICO was not reset between runs, and therefore the initial position was wrongly determined to lie outside of the hospital. This is why the parking algorithm did not turn on. The fix for this issue is to add a line to the shell script that restarts PICO and resets its odometry data. After the challenge, another test was performed in which PICO was reset before the start. During that trial the transition to parking worked, but the control effort limit was not tuned properly and PICO did not stop upon hitting the wall. An additional hour of testing would have been enough to determine the required value and change it in the main code.

Another problem, which was not encountered but would have appeared had parking succeeded, was the transition to driving to the object. The final integration of the code was done in a rush: upon finding the object, PICO would have said that it had found it, but would have kept following the wall without making the transition to drive toward the object.


List of possible improvements:
- Include odometry data reset in the shell script.
- Instead of mapping with a wall follower, use data from gmapping to map until all the known space is enclosed.
- Treat matrix data in a way to determine doors and rooms from the map matrix.
- Use high-level hint to determine the room where the object is.

Conclusion

Code

Old text

Worldmodel: The world model contains the prior knowledge that is available as well as information about the surroundings of PICO. It is in the world model that the perception function stores the processed data from the laser rangefinder. Furthermore, it contains a big matrix representing the map in which PICO is located. This matrix is updated at each time step. Initially, all values are unknown (represented by -1); then the values are replaced by 100 for known occupied space and 0 for known free space. This representation allows for easy path planning by means of Dijkstra or A*. The matrix data can also be used to create semantics for the map: doors, rooms and the corridor can be classified by clever treatment of the data. A drawback of this matrix created by gmapping is that prior knowledge is not incorporated. For example, sometimes the walls measured by mapping are not entirely straight or the corners are not exactly orthogonal, which makes processing and recognition of the data harder.
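As a sketch of the path planning the matrix enables: with uniform step costs, Dijkstra reduces to a breadth-first search over the free cells. The function below is illustrative, not the project's code; it returns the length of a shortest 4-connected path over free cells (value 0), or -1 if none exists:

```cpp
#include <cassert>
#include <queue>
#include <utility>
#include <vector>

// Grid values as in the text: 100 = occupied, 0 = free, -1 = unknown.
// Shortest path from (sr, sc) to (gr, gc) through free cells only.
int shortestPath(const std::vector<std::vector<int>>& grid,
                 int sr, int sc, int gr, int gc) {
    int h = grid.size(), w = grid[0].size();
    std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));
    std::queue<std::pair<int, int>> q;
    dist[sr][sc] = 0;
    q.push({sr, sc});
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [r, c] = q.front(); q.pop();
        if (r == gr && c == gc) return dist[r][c];
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= h || nc < 0 || nc >= w) continue;
            if (grid[nr][nc] != 0 || dist[nr][nc] != -1) continue;
            dist[nr][nc] = dist[r][c] + 1;
            q.push({nr, nc});
        }
    }
    return -1; // goal unreachable through known free space
}
```

A* would add a heuristic (e.g. the straight-line distance to the goal) to explore fewer cells, but on maps of this size plain BFS is already fast enough.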

Perception: In perception, the sensor data is processed. First, measurements with distances smaller than 10 cm are filtered out; these are instances where the laser rangefinder sees PICO itself. Then the distances and angles are used to convert each data point into Cartesian coordinates. Lastly, three points on the left of the robot are used to check alignment with the wall.

Monitoring: map complete, location, driving.

Planning: map until the known space is enclosed; drive to start; bump into the wall at the start; map again and compare to the previous map; drive to the object.

Control: follow wall; better trajectory following; drive backwards.