Embedded Motion Control 2015 Group 3


Revision as of 16:49, 7 June 2015

This is the Wiki-page for EMC-group 3.

Group members

Name Student number Email
Max van Lith 0767328 m.m.g.v.lith@student.tue.nl
Shengling Shi 0925030 s.shi@student.tue.nl
Michèl Lammers 0824359 m.r.lammers@student.tue.nl
Jasper Verhoeven 0780966 j.w.h.verhoeven@student.tue.nl
Ricardo Shousha 0772504 r.shousha@student.tue.nl
Sjors Kamps 0793442 j.w.m.kamps@student.tue.nl
Stephan van Nispen 0764290 s.h.m.v.nispen@student.tue.nl
Luuk Zwaans 0743596 l.w.a.zwaans@student.tue.nl
Sander Hermanussen 0774293 s.j.hermanussen@student.tue.nl
Bart van Dongen 0777752 b.c.h.v.dongen@student.tue.nl

General information

This course is about software design and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.

The goal of this course is to acquire knowledge and insight into the design and implementation of embedded motion systems. A further purpose is to develop insight into the possibilities and limitations of the embedded environment (actuators, sensors, processors, RTOS). To make this operational, the students practically implement an embedded control system for an autonomous robot in the Maze Challenge, in which the robot has to find its way out of a maze.

PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:

  1. The laser data from the laser range finder
  2. The odometry data from the wheels

In the fourth week the "Corridor Competition" takes place. During this challenge, the students have to let the robot drive through a corridor and then take the first exit.

At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit.

Design

In this section the general design of the project is discussed.

Requirements

The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:

  • Move and reach the exit of the maze.
  • Avoid bumping into the walls; to do so, the robot must perceive its surroundings.
  • Solve the maze in a 'smart' way.

Functions & Communication

The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills.

The tasks are the highest-level actions the robot should be able to perform. These are:

  • Determine situation
  • Decision making
  • Skill selection

The skills are specific actions that accomplish a certain goal. The list of skills is as follows:

  • Drive
  • Rotate
  • Scan environment
  • Handle intersections
  • Handle dead ends
  • Discover doors
  • Mapping environment
  • Make decisions based on the map
  • Detect the end of the maze
  • Drive to location: drive to a position in the maze based on the information in the map. This includes (multiple) drive and turn actions. Driving always includes scanning to ensure there are no collisions.
  • Check for doors: wait at the potential door location for a predetermined time period while scanning the distance to the potential door to check if it opens.
  • Locate obstacles: measure the preset minimum safe distance from the walls, or detect that the robot is not moving as expected according to the driving action.
  • Map the environment: store the relative positions of all discovered objects and doors and incorporate them into a map.

These skills are then used in the higher-order behaviors of the software. These are specified in the specifications section.
Block diagram of the connections between the contexts

Specifications

The first specification results from the second requirement: Driving without bumping into objects. In order to do this, the robot uses its sensors to scan the surroundings. It then adjusts its speed and direction to maintain a safe distance from the walls.

The way the robot solves the maze consists of a few principal capabilities. Because of the addition of doors in the maze, the strategy of wall hugging is no longer effective, so a different approach is required. The second specification is therefore that the robot remembers what it has seen of the maze and builds a map accordingly. The robot then uses this map to decide which avenue to explore next.

Because the doors in the maze might not be clearly distinguishable, they may be difficult to detect. The only way to know for sure whether a door is present at a certain location is to stand still in front of it until it opens. Standing still in front of every piece of wall to check for doors takes a long time and is therefore not desirable when escaping the maze as fast as possible. The robot therefore first assumes that there are no doors on the way to the exit. It explores by following the wall and taking every left turn. Whenever a dead end is hit, the robot goes back to the last intersection and chooses the next left path. Because the robot maps the maze, it knows whether it has explored an area and when it moves in a loop.

Structure

The problem is divided into different blocks. We have chosen to make these four blocks: Drive, Scan, Decision and Mapping. The following is an overall structure of the software:

Cohesion of Drive-, Scan-, Decision- and Mapping block

Calibration

Calibration part (if we want)

...

...

Software implementation

In this section, the implementation of the software is discussed based on the block division we made.

A brief description of each block can be found here. More detailed problem-solving processes and ideas can be found in the sub-pages of each block.

Drive block

Basically, this block is the doer (not the thinker) of the complete system. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in Scan. The potential field is an easy way to drive through corridors and make turns.

Two other methods were also considered: a simple method and path planning. However, the potential field was the most robust and easiest method.

Link to drive page

Schematic figure of virtual walls


The composition pattern of the drive block:

CP of Drive

Scan block

The Scan block processes the laser data of the laser range finder.


With this data we want to detect the walls. This is used to find corridors, doors and all kinds of junctions. The data retrieved by 'scan' is used by all three other blocks:

  1. Scan directly gives information to 'drive', which uses it to avoid collisions.
  2. Scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.
  3. Mapping also uses data from scan to map the maze.
Potential field

Splitting the received laser data from the LRF into x and y components and summing them results in a large vector pointing in the appropriate direction for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.
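This vector-sum idea can be sketched as follows (a minimal illustration in Python, not the group's actual code; the function and parameter names are assumptions):

```python
import math

def potential_field_direction(ranges, angle_min, angle_increment):
    """Sum the x/y components of all LRF beams; the angle of the
    resultant vector points towards the most free space."""
    x_sum = 0.0
    y_sum = 0.0
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_increment
        x_sum += r * math.cos(angle)
        y_sum += r * math.sin(angle)
    # Only the direction of the resultant matters; its magnitude is
    # ignored, since the velocity is decided elsewhere.
    return math.atan2(y_sum, x_sum)
```

With three beams at -90°, 0° and +90° and the longest range straight ahead, the resultant angle is 0, i.e. PICO keeps driving forward.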

In straight corridors the potential field will let PICO drive in the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.


Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer is added on top of the laser range finder data that PICO sees. From there on, the potential field will do its work and PICO will drive in the desired direction.

The potential field function perceives these virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the right corridor. The 'decision maker' in combination with the 'mapping algorithm' decides where to place the virtual walls.
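One way to realise such a virtual wall is to clamp the ranges of the beams pointing into a blocked corridor, so the potential field sees a nearby wall there. The sketch below assumes this clamping approach; all names and the 0.3 m wall distance are illustrative, not taken from the group's code:

```python
def block_direction(ranges, angle_min, angle_increment,
                    blocked_from, blocked_to, wall_distance=0.3):
    """Overlay a virtual wall on the laser data: every beam whose angle
    (radians) falls inside [blocked_from, blocked_to] is clamped to a
    short range, so the potential field steers away from it."""
    modified = list(ranges)
    for i in range(len(modified)):
        angle = angle_min + i * angle_increment
        if blocked_from <= angle <= blocked_to:
            modified[i] = min(modified[i], wall_distance)
    return modified
```

The modified scan is then fed to the potential field instead of the raw data, which needs no changes of its own.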

Link to scan page


Constructing virtual walls
Collision avoidance

To create an extra layer of safety, collision avoidance has been added on top of the potential field. In general the potential field avoids collisions, but when constructing the virtual walls fails, the robot may crash into a wall and its attempt at solving the maze is over. The collision avoidance itself is fairly simple: when the distance measured by multiple adjacent LRF beams is below a certain value, PICO moves in the opposite direction. Multiple beams are used to make this method more robust. The current parameter for activating collision avoidance is set at 30 centimeters measured from the scanner; note that this value is based on the dimensions of PICO, so the actual clearance to the robot's body is significantly less.
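The trigger condition can be sketched as a simple count over the scan (illustrative Python; the 0.30 m threshold comes from the text, while the minimum beam count and all names are assumptions):

```python
def collision_imminent(ranges, threshold=0.30, min_beams=5):
    """Return True when at least `min_beams` LRF beams measure less
    than `threshold` metres. Requiring several beams makes the check
    robust against single noisy measurements."""
    close = sum(1 for r in ranges if r < threshold)
    return close >= min_beams
```

When this returns True, the drive block overrides the potential field and moves PICO away from the nearby obstacle.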

Decision block

The decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.

Input:

  • Mapping model
  • Scan data

Output:

  • Specific drive action command

The maze-solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor. Every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, the direction with the fewest marks is chosen. If two directions have been visited equally often, one of them is chosen at random. Eventually, the exit of the maze will be reached. (ref)

Different situations when visiting a node:

  • It is a dead-end node
  • Did the door open for me?
  • Any unvisited paths?
  • Any paths with 1 visit?
  • Paths with 2 visits (not a choice)
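The fewest-marks rule with a random tie-break can be sketched as follows (a minimal illustration assuming the marks are available as a direction-to-count mapping; the real decision block also handles doors and direction preferences):

```python
import random

def choose_direction(edge_marks):
    """Tremaux-style choice: pick an edge with the fewest marks,
    breaking ties at random. `edge_marks` maps a direction to the
    number of times that edge has been marked (0, 1 or 2)."""
    fewest = min(edge_marks.values())
    if fewest >= 2:
        # All edges marked twice: no unexplored route remains here.
        return None
    candidates = [d for d, marks in edge_marks.items() if marks == fewest]
    return random.choice(candidates)
```

For example, with marks {'left': 1, 'straight': 0, 'right': 2} the unvisited 'straight' edge is always chosen.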

Link to decision page

Mapping block

This block stores the corridors and junctions of the maze. This way, the decision block can consider certain possibilities, ensuring that the maze is solved in a strategic way.

As mentioned in the previous paragraph, the Trémaux algorithm is used: [1].

Map & solve algorithm

The maze consists of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is a choice of direction that will lead to solving the maze.
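A possible in-memory representation of these nodes and edges is sketched below (illustrative Python data structures; the field names are assumptions, not the group's actual types):

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    """Connection between two nodes; to_node == from_node is a loop."""
    from_node: int
    to_node: int
    visits: int = 0                 # Tremaux mark count (0, 1 or 2)
    direction: tuple = (0.0, 0.0)   # vector from the node towards the
                                    # edge, in global coordinates

@dataclass
class Node:
    """Junction or dead end, stored in global coordinates."""
    x: float
    y: float
    edges: list = field(default_factory=list)
    is_dead_end: bool = False
```

Extra edge properties mentioned below (energy consumption, traveling time, shape) would simply become additional fields on `Edge`.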

The schedule looks like this:

  • Updating the map:
    • The robot tries to find where it is located in global coordinates. It can then decide whether it is on a new node or an old node.
    • The robot figures out from which node it came. It can then define which edge it has been traversing, and marks that edge as 'visited once more'.
    • All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge, and so on. This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.
    • The robot also has to recognize whether the current node is connected to a dead end. In that case, it requests the possible door to open.
  • Choosing a new direction:
    • Check if the door opened for the robot. In that case: go straight ahead and mark the edge that led up to the door as visited twice. If not, choose the edge where the robot came from.
    • Are there any unvisited edges connected to the current node? In that case, follow the edge straight ahead if it is unvisited. Otherwise, follow the unvisited edge on the left; otherwise, the unvisited edge on the right.
    • Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that were visited once, follow the one straight ahead; otherwise left, otherwise right.
    • Are there any edges visited twice? Do not go there. According to the Trémaux algorithm, there must either be an edge left to explore (visited once or not yet), or the robot is back at the starting point and the maze has no solution.
  • Translation from the chosen edge to a turn command:
    • The nodes are stored in a global coordinate system. Each edge has a vector pointing from the node in the direction of the edge in global coordinates. The robot must receive a command that guides it through the maze in local coordinates.
  • The actual command is formulated.
  • A set-up is made for the next node:
    • e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.
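The translation step from a global edge vector to a local turn command amounts to an angle difference, normalised to (-π, π]. A minimal sketch, assuming the robot's global heading is known (all names are illustrative):

```python
import math

def edge_to_turn_command(edge_direction, robot_heading):
    """Translate the chosen edge's global direction vector into the
    heading change (radians) the robot must make, normalised so the
    robot always takes the shorter rotation."""
    target = math.atan2(edge_direction[1], edge_direction[0])
    delta = target - robot_heading
    # Normalise to (-pi, pi].
    while delta <= -math.pi:
        delta += 2 * math.pi
    while delta > math.pi:
        delta -= 2 * math.pi
    return delta
```

For instance, an edge pointing along global +y while the robot faces global +x yields a command of +π/2, i.e. a left turn.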

Link to mapping page

Localisation

Not far enough yet

...

...

Integration

.... ....

Experiments

Seven experiments were done during the course. Here you can find short information about the dates and goals of the experiments, as well as a short evaluation of each experiment.

Files & presentations

  1. Initial design document (week 1): File:Init design.pdf
  2. First presentation (week 3): File:Group3 May6.pdf
  3. Second presentation (week 6): File:Group3 May27.pdf

Videos

Experiment 4: Testing the potential field on May 29, 2015.

Archive

This page contains alternative designs that were not used in the end. To see what we have worked on during the entire process, it can be interesting to look at some of these ideas.

EMC03 CST-wiki sub-pages