Embedded Motion Control 2016 Group 2

From Control Systems Technology Group

Group Members

0779760 Ilias Aouaj
0818316 Florian Bastiaens
0755206 Yannick Knops

Goal

The goal of the maze competition project is to create software that lets the Pico robot solve a maze and drive through it fully autonomously, as fast as possible. The maze contains obstacles that the robot has to overcome while solving it, for example doors, open spaces, dead ends and loops.

Group progress

Week 1

During the first week all group members installed the required software. Furthermore a start was made with the tutorials.

Week 2

The start of week two was all about the design plan: the requirements, the functions to use, the specifications and the components were discussed. A quick overview of the initial design can be found at the bottom of this page. Moreover, the tutorials were finished in week two.

Week 3

In week three the actual coding started. Tasks were divided and a start was made on several aspects of the code.

Week 4

In this week a major shift occurred in our group. Two group members left, and the code we had initially worked on was too complex. We decided to start from scratch, working on the basic software of the robot. Completing the corridor challenge was our main priority. We decided to use a potential field only in the x-direction to keep the robot positioned between the walls and avoid collisions. We did not expect any problems in the y-direction, so the speed in the y-direction remained constant. Corridor detection was implemented based on the LRF data left and right of the robot.

Week 5

In this week we combined the potential field, used to avoid collisions and keep the robot between the corridor walls, with corridor detection to obtain three states for the robot. In the first state a corridor is recognized on the left side; the translational motion is set to zero and the robot keeps rotating until it faces the direction of the detected corridor. The second state is almost exactly the same, except that the recognized corridor is on the right side of the robot. In the third state the robot moves through the corridor with a constant speed in the y-direction, a varying speed in the x-direction based on the potential field, and the rotational speed set to zero. This was also the week of the corridor challenge, which we passed on the first try. The only issue was that the robot moved too slowly in the y-direction, due to the unchanged parameter for the speed "Vy" that we used during the tests. On the same day we gave our intermediate presentation; a link to it can be found at the bottom of this page under 'Files'.

Week 6

In week six the main focus was on the potential field and wall detection. The potential field used in the corridor challenge was too simplistic to provide robust collision prevention during the maze challenge. Repulsive forces in the y-direction were added and the scaling was improved. Furthermore, object detection was implemented. Using object detection, a reference point was placed by the function 'planning'.

Week 7

The implementation of the potential field together with object detection and planning caused problems. When turning, object detection returned unreliable objects. Since object detection was required for the placement of the reference point, we could not disable it during turning. Therefore a new way of placing the reference point was developed; this is discussed in more detail in 'limited range open-space detection' below. The new reference placement, together with the potential field, was tested and worked satisfactorily on the real robot. Furthermore, the final presentation was given; the PDF can be found under 'Files'. In the second part of the week the focus was on mapping and planning. The plan was to create some sort of grid, representing the crossings, junctions and other objects. Planning could then use this grid to decide where to place the reference point, based on the Trémaux solving algorithm.

Week 8

Week eight was the week of the maze challenge. Mapping the maze was too complicated for us at this stage, so an alternative maze-solving strategy was developed; for more detail see the section 'maze solving strategy'. Moreover, dead-end detection was added. At the beginning of the week the software was tested. Results were positive, except for side-door detection. Minor changes to the code were made before the actual maze challenge. The robot did not complete the maze and bumped into a wall during both trials. For a more detailed report on the maze challenge, go to the section 'The maze challenge'.

Initial design

The PDF of the initial design can be found in the section 'Files'.

Requirements

Based on the restrictions of the maze competition and the conditions that have to be met, the following requirements are set:

  • The maze has to be solved by the robot autonomously within a given time frame
  • The robot has to avoid collisions with the walls and other obstacles
  • The robot has to recognize doors
  • The robot cannot be idle for a certain time

Functions

To keep a well-structured overview of the functions, they are divided into two categories: the basic hardware functions and the skills. The basic functions are the lowest level and represent the hardware. The skills are combinations of basic functions that complete more difficult tasks.
The basic functions are:

  • Translation using omniwheel actuators
  • Rotation using omniwheel actuators
  • Scan using the Laser Range Finder (LRF)
  • Scan distance driven using odometry

These functions can then be combined to create the following skills:

  • Drive:
The robot moves using this skill. It uses the basic functions translation and rotation. It also uses another skill, collision detection.
  • Collision detection/prevention:
The collision detection skill prevents the robot from hitting walls while navigating through the maze. It provides live feedback control from the LRF by detecting the distance to the walls and correcting the robot's movement.
  • Junction recognition:
This skill is used to recognize junctions and corners in the maze. When the robot is moving through a corridor, the presence of a junction or corner can be determined from the data of the LRF. This data needs to be processed to make the robot aware of the junctions and corners.
  • Mapping:
This function maps the corners and junctions of the maze as nodes and corridors. The function is essential for preventing loops while solving the maze. This skill depends on the Junction recognition skill.
  • Localization:
This skill will locate the robot with respect to the maze. This skill will probably use a combination of the LRF and odometry data to determine a location in the map created with the mapping skill.
  • Recognizing doors:
As previously described, the robot should be able to open doors by beeping in front of them. Therefore the system has to recognize a door to prevent beeping at every obstacle and thus wasting time.

Specifications

Specifications related to the requirements are estimated in this section.

  • Two attempts are allowed within a total time of 7 minutes.
  • The jury decides if a robot bumps into a wall or obstacle. If so, the attempt is over.
  • The robot has to recognize doors. Using a sound signal, the door can be opened.
  • If the robot is idle for more than 30 seconds, the attempt is over.

Components

For this project, the robot named Pico will be used. The hardware that Pico contains can be divided into the parts below:

  • Actuator:
    • Holonomic base with omni-wheels
  • Sensors:
    • Laser Range Finder (LRF)
    • Wheel encoders (odometry)
  • Computer:
    • Intel i7
    • Running Ubuntu 14.04

Interfaces

The way the software works can be seen in the figure below.
The figure shows the example of the care robot, which is to a large extent similar to this project.

Software

The software can be divided in 5 contexts:

  • Challenge context:
The challenge context is the goal that needs to be accomplished.
  • Skills context:
The skills context contains the skills that are needed to accomplish the goal.
  • Task context:
The task context contains the decisions that have to be made, for example which turn to take.
  • Robot context:
The robot context contains the sensors and actuators of the robot.
  • Environment context:
The environment context contains data on where the robot is with respect to its direct environment, for example the walls and doors, and with respect to the maze.

The corridor challenge

The strategy to complete the corridor challenge was straightforward. Drive straight whilst staying in the centre of the corridor and take the first exit.

Components

The following components were required for the strategy described above:

  • Potential field:
A simple but robust sideways potential field that keeps the robot in the center of the corridor.
  • Corridor recognition:
Corridor recognition based on jumps in laser range data, as depicted in figure X.X.
  • A state switch
After recognizing the corridor and being within a certain range, a new state is activated. In this state the robot drives to the middle of the corridor. Then the robot turns and aligns itself as well as possible with the corridor, based on odometry data.
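The state logic above can be sketched as follows. This is an illustrative sketch only, not the group's actual code; the state names, speeds and the 0.5 rad/s turn rate are assumptions.

```python
# Hypothetical sketch of the three driving states used for the corridor
# challenge: follow the corridor with a sideways potential-field speed,
# or rotate in place toward a corridor detected left or right.
STATE_FOLLOW, STATE_TURN_LEFT, STATE_TURN_RIGHT = range(3)

def next_state(corridor_left, corridor_right):
    """Pick a driving state from the corridor-recognition flags."""
    if corridor_left:
        return STATE_TURN_LEFT
    if corridor_right:
        return STATE_TURN_RIGHT
    return STATE_FOLLOW

def command(state, potential_vx, v_forward=0.3):
    """Return (vx, vy, omega): sideways potential-field speed while
    following; pure rotation while turning toward a detected corridor."""
    if state == STATE_FOLLOW:
        return (potential_vx, v_forward, 0.0)
    omega = 0.5 if state == STATE_TURN_LEFT else -0.5
    return (0.0, 0.0, omega)
```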

The result

Pico successfully completed the corridor challenge using the strategy described above. A note worth mentioning is that the default velocity was low. This was a safety decision made by the group, since this was also the speed used for testing. Theoretically the default speed should not have influenced the robustness of Pico. Moreover, during the challenge it became clear that the velocity could have been significantly higher.

Final implementation

Detection

Landmark detection

To decide which corner to take and which route to follow, landmarks like T-junctions and crossings have to be detected.
To do this, each standard crossing is given a landmark number.
When detection sees a landmark, it detects the features of that landmark and assigns the landmark's number to a variable named object.
This variable is read by the planning block, which then makes a decision based on the stored object number.
Below are some explanations of how the landmarks are detected.

Dead end detection

When at least 95% of all LRF points are smaller than a predefined range, detection gives the variable object the number that corresponds with a dead end.
This activates the dead-end protocol. In this protocol Pico stops 0.4 m in front of the dead-end wall, beeps, and waits 5 seconds. After these 5 seconds, it checks whether a door has opened or whether it is still a dead end.
If the object number still indicates a dead end, Pico turns 180 degrees and continues to drive. If a door was opened, Pico continues along the new path.
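The 95% test can be sketched as below; the function name and the 0.7 m range are illustrative, not the values from the actual code.

```python
def is_dead_end(ranges, max_range=0.7, fraction=0.95):
    """Dead-end test as described above: true when at least 95% of the
    LRF points are closer than a predefined range. Parameter values
    are placeholders, not the tuned values used on Pico."""
    close = sum(1 for r in ranges if r < max_range)
    return close >= fraction * len(ranges)
```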

This is presented in the videos below.

dead end with a door
dead end without a door

Side door detection

Because we use the limited LRF range to place our reference points in open spaces, a side door is harder to detect. For these side doors a separate protocol was written. When the robot drives through a corridor it keeps scanning a small set of points left and right for a sudden change in LRF range. When it detects a sudden change, the algorithm checks whether the x-values with reference to the robot lie within a certain tolerance, both for the points after the sudden change and for the points before it. This check makes the detection more robust against false positives.
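The jump scan at the heart of this protocol could look like the sketch below. The 0.5 m jump threshold is an assumption; the tolerance check on the x-values is omitted here for brevity.

```python
def detect_side_jump(ranges, jump=0.5):
    """Scan consecutive LRF ranges for a sudden change larger than
    `jump`, as used for side-door detection. Returns the index of the
    first point after the jump, or None if no jump is found."""
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:
            return i
    return None
```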

limited range open-space detection

To detect possible corridors more robustly, a variable limited LRF range was used. By limiting the LRF range, open spaces that can indicate corridors can be detected, as can be seen in the figure below.

reference point placement with 3 potential corridors
reference point placement with 2 potential corridors


The picture shows the limited LRF range of Pico before a crossing. The grey area is where all the limited LRF points are equal to the set maximum range. The orange area is where the LRF detects a point within the limited range.
The open-space detection loops over all the limited-range data from right to left, anti-clockwise, and determines where a high number of consecutive points are equal to the limited range; such an area is an open space and thus a potential corridor.
In the figure above this can be seen as the grey area. To prevent small gaps from being detected, a minimum number of consecutive points has to be found in order to label an area a potential corridor.

Each time a potential corridor is detected, an x and y reference in the middle of the potential corridor at maximum range is stored. The first one detected is always saved as the first reference, the second one detected as the second reference, and so on, moving from right to left. This is also indicated by the numbers in the open spaces in the figure above.
When no open space is detected, for example in a dead end, the reference point is placed directly in front of Pico. When driving in an actual open space, the maximum range of the LRF data is increased so it can again detect potential corridors.
If a high number of points are still above the increased maximum range, the reference point is placed directly in front of Pico.
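The run-finding step can be sketched as below, working on beam indices only (converting the middle index to an x, y reference is omitted). The minimum of 20 consecutive points is an illustrative value.

```python
def open_space_references(ranges, max_range, min_points=20):
    """Scan limited-range LRF data from right to left and return the
    middle index of each run of consecutive points at the maximum
    range; each run is a potential corridor. Runs shorter than
    min_points are discarded to suppress small gaps."""
    refs, start = [], None
    for i, r in enumerate(ranges):
        if r >= max_range:
            if start is None:
                start = i          # a run of max-range points begins
        else:
            if start is not None and i - start >= min_points:
                refs.append((start + i - 1) // 2)  # middle of the run
            start = None
    # close a run that extends to the end of the scan
    if start is not None and len(ranges) - start >= min_points:
        refs.append((start + len(ranges) - 1) // 2)
    return refs
```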

Localization

Being aware of the global position of the robot is critical for mapping and also useful for the motion of the robot. The global position of the robot is formulated as, [math]\displaystyle{ \begin{bmatrix} x\\ y\\ \theta \end{bmatrix} }[/math]. The figure below shows a graphical representation of the global position.

Global.png

Discrete time model

With help of trigonometric properties, the discrete-time model of the global position of the robot is as follows,

[math]\displaystyle{ \begin{bmatrix} x_k\\ y_k\\ \theta_k \end{bmatrix} =\begin{bmatrix} x_{k-1}+V_x\Delta{t}\cos(\theta_k)-V_y\Delta{t}\sin(\theta_k) \\ y_{k-1}+V_x\Delta{t}\sin(\theta_k)+V_y\Delta{t}\cos(\theta_k) \\ \theta_{k-1}+\omega{\Delta{t}} \end{bmatrix} }[/math].

Where [math]\displaystyle{ V_x }[/math] is the sideward velocity, [math]\displaystyle{ V_y }[/math] the forward velocity, [math]\displaystyle{ \omega }[/math] the rotational speed and [math]\displaystyle{ \Delta{t} }[/math] the duration of each time step in the discrete model.
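The model can be written out directly in code; this is a plain transcription of the equation above, with the function name chosen here for illustration.

```python
from math import cos, sin

def update_pose(x, y, theta, vx, vy, omega, dt):
    """One step of the discrete-time model above: vx is the sideward
    velocity, vy the forward velocity, omega the rotational speed and
    dt the duration of one time step."""
    theta_k = theta + omega * dt
    x_k = x + vx * dt * cos(theta_k) - vy * dt * sin(theta_k)
    y_k = y + vx * dt * sin(theta_k) + vy * dt * cos(theta_k)
    return x_k, y_k, theta_k
```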

Extended Kalman filter

Since the odometry data is unreliable over long runs, a Kalman filter is applied to minimize the error in the global position of the robot. The following is needed for a Kalman filter in the context of the robot:

  • Prediction model of the global position.
  • Input data.
  • Odometry data.
  • Process noise and measurement noise.

The discrete-time model of the global position mentioned before is used as the prediction model. The input data are the forward speed, sideward speed and rotational speed calculated by the code and given to io.sendBaseReference(). Combining the prediction model and the input data gives an estimate of the global position of the robot. To get a proper estimate, the measurement noise and the process noise need to be modelled correctly. The measurement noise can be based on sampling experimental data and deriving the statistical properties. The covariance matrix [math]\displaystyle{ R }[/math] of the measurement noise can be calculated by,

[math]\displaystyle{ R_{ij} = \frac{1}{N-1}\sum_{k=1}^{N}(x_i-\bar{x}_i)(x_j-\bar{x}_j). }[/math] 

The covariance matrix [math]\displaystyle{ Q }[/math], which represents the process noise, is not as straightforward as the covariance matrix for the measurement noise. It is used to filter the errors in the system model. Therefore the covariance matrix [math]\displaystyle{ Q }[/math] is treated as a tuning parameter. The extended Kalman filter follows the scheme below to estimate the global position of the robot.
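The predict/update cycle of the scheme can be illustrated in one dimension, here for the heading angle. This is a didactic sketch, not the group's 3-state filter; q, r and dt are placeholders for the tuned values.

```python
def kalman_1d(x_est, p_est, u, z, q, r, dt=0.05):
    """One predict/update cycle of a Kalman filter, reduced to one
    dimension (the heading angle) for clarity. x_est/p_est are the
    previous estimate and its variance, u the commanded rotational
    speed, z the odometry measurement, and q/r the process and
    measurement noise variances (tuning parameters)."""
    # Predict: propagate the estimate through the motion model.
    x_pred = x_est + u * dt
    p_pred = p_est + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```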


Kalman.png

Mapping

The idea of mapping is to memorize certain points in the maze. This would be used to solve the maze, as the code could tell whether the robot had already visited certain locations. The initial idea was to save the positions of landmarks of the maze, in this case corners of the corridors and points used to recognize junctions. This proved difficult, which is why it was decided to map the maze with another, rather straightforward, method: every two seconds during the maze solving, the current position of the robot is registered; see the left picture below for a graphical representation, where the red dots represent the registered positions. These positions are saved in a list which is used in the maze-solving algorithm.
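The position registration can be sketched as a small logger; the class name and the injectable clock are illustrative choices, the two-second interval follows the text.

```python
import time

class PositionLogger:
    """Sketch of the mapping approach described above: store the
    current position every two seconds. The clock is injectable so
    the behaviour can be tested without waiting."""
    def __init__(self, interval=2.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.positions = []
        self._last = None

    def update(self, x, y):
        """Call every control cycle; records (x, y) at most once
        per interval."""
        now = self.clock()
        if self._last is None or now - self._last >= self.interval:
            self.positions.append((x, y))
            self._last = now
```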


Planning

The planned planning

The final planning

Since we could not get mapping/landmark detection to work robustly, we chose a simple decision-making algorithm to solve the maze in the end. This algorithm was based on the odometry data. An adjustment to the odometry data was made to make the data continuous, without jumps. The planning block has at most three potential reference points to choose from: left, right or straight. Initially Pico always picks the first reference point, which is the rightmost option. When Pico has turned a few times to the right, the planning block picks the second reference point when available. When Pico continues to turn to the right, planning always picks the last reference point.

Loopsolve.png

The maze challenge

The maze itself can be seen in the figure below.
Maze design final challenge.png

The result

Unfortunately we did not pass the maze challenge itself. During the first attempt the robot recognized doors everywhere and eventually collided sideways with a wall. During the second attempt the robot turned right, took another right and then continued through the corridor until it passed the door on the left and turned right again. Then Pico drove close to a wall, stopped, and then bumped into it.

Evaluation

During both attempts, the planning block interfered with the driving. The planning block chooses a reference point out of all possible choices based on how many rotations the robot has made; the odometry data was used to estimate the number of rotations Pico had made to the right. Unfortunately, the switching criterion between picking the right or left reference was reached mid-corner, because the odometry data already had an initial state at the start.
This caused the robot to repeatedly switch between picking the first and second reference, which led to the robot driving into the wall between the reference points. On top of that, the potential field was not scaled aggressively enough to prevent a sideways collision.

Fixing the problem

To fix the problem described above, the potential field was scaled a bit more aggressively. Furthermore, the reference choice was made more robust by introducing states. If Pico makes more than 1.5 rotations to the right, a state is entered that keeps picking the second reference point until the net rotation of Pico is zero again. When Pico makes more than 2.5 rotations to the right, another state is entered that keeps picking the last reference point until the rotations decrease to 1.5. After falling below 1.5 rotations, the second reference point is picked again. Below is a video where Pico passes the maze challenge in simulation.
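The hysteresis described above can be sketched as a small state machine. The state names and return convention (0 = rightmost reference, 1 = second, -1 = last) are illustrative; the 1.5 and 2.5 rotation thresholds follow the text.

```python
def pick_reference(rotations, state):
    """One step of the reference-picking hysteresis: `rotations` is
    the net number of right-hand rotations from odometry, `state` is
    one of 'right', 'second', 'last' and persists between calls.
    Returns (reference_choice, new_state)."""
    if state == 'last':
        if rotations < 1.5:
            state = 'second'       # drop back once below 1.5 rotations
    elif state == 'second':
        if rotations > 2.5:
            state = 'last'         # keep picking the last reference
        elif rotations <= 0.0:
            state = 'right'        # net rotation back to zero
    else:  # 'right'
        if rotations > 1.5:
            state = 'second'       # start picking the second reference
    choice = {'right': 0, 'second': 1, 'last': -1}[state]
    return choice, state
```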

Maze challenge.png


Files