Embedded Motion Control 2013 Group 10



= Corridor Challenge =
== Positioning ==
[[File:Center_Corridor.png|right|100px]]
To remain in the center of the corridor and avoid hitting the walls, the position is continuously checked and controlled. This is visualized in the figure on the right. The black area is the wall, the red area is close to the wall and should be avoided, and the green area is where it is safe to drive. The robot drives forward until it enters the red area; whether it is inside the red area is determined directly from the laser data. If so, it makes a slight turn (about 5 degrees) toward the green area and then continues driving forward.
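The red/green decision described above can be sketched as follows (an illustrative Python sketch; the function name, red-zone threshold and turn angle default are hypothetical, not the group's tuned values):

```python
def center_correction(left_min, right_min, red_zone=0.3, turn_deg=5.0):
    """Decide a steering correction from the nearest wall distance on
    each side of the robot (taken from the laser data).

    Returns a turn in degrees: negative = turn right, positive = turn
    left, 0 = keep driving straight (green area)."""
    if left_min < red_zone and left_min <= right_min:
        return -turn_deg  # too close to the left wall: turn away from it
    if right_min < red_zone:
        return turn_deg   # too close to the right wall: turn away from it
    return 0.0            # safely inside the green area


print(center_correction(0.2, 1.0))  # -5.0: left wall too close
print(center_correction(1.0, 1.0))  # 0.0: keep driving forward
```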


== Wall Avoidance ==
[[File:Wall_avoidance.png|right|100px]]
Wall avoidance has the highest priority and is therefore active at all times. It measures the distance in all possible directions; once any distance becomes too small, the “wall avoidance node” overrules all other nodes and commands the robot to stop. The threshold for the distance is not just a scalar: since the robot has some velocity in the forward direction, more space is required in front of the robot than to its sides. Therefore the threshold has an elliptic profile, as shown in the figure.
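The elliptic threshold can be implemented by transforming each laser point to robot coordinates and testing it against the ellipse (a sketch; the semi-axes a and b below are hypothetical values, not the group's parameters):

```python
import math

def inside_elliptic_threshold(r, theta, a=0.8, b=0.4):
    """True if a laser point at distance r and bearing theta (radians,
    0 = straight ahead) lies inside the elliptic safety region with
    semi-axis a in the forward direction and b sideways."""
    x = r * math.cos(theta)   # forward component
    y = r * math.sin(theta)   # sideways component
    return (x / a) ** 2 + (y / b) ** 2 < 1.0

def must_stop(scan):
    """scan: iterable of (r, theta) laser measurements. The robot must
    stop as soon as any point falls inside the ellipse."""
    return any(inside_elliptic_threshold(r, th) for r, th in scan)
```

Because the ellipse is longer in the forward direction, a point 0.5 m straight ahead triggers a stop, while the same distance to the side does not.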


== Noise reduction ==
To reduce noise, a Gaussian filter is applied:
<math>G(x)=\frac{1}{\sigma\sqrt{2 \pi}}e^{\frac{-x^2}{2 \sigma^2}}</math>


This Gaussian function is used to build a 9x1 kernel, so that every point is the weighted average of nine measurements.
This results in the following kernel for <math>\sigma=2</math>:


g = [0.0276 0.0663 0.1238 0.1802 0.2042 0.1802 0.1238 0.0663 0.0276]


The kernel is normalized so that sum(g) = 1.
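The kernel above can be reproduced by sampling the Gaussian at integer offsets and normalizing (Python is used here purely for illustration):

```python
import math

def gaussian_kernel(size=9, sigma=2.0):
    """Sample G(x) at integer offsets around zero and normalize so the
    weights sum to exactly 1."""
    half = size // 2
    g = [math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
         for x in range(-half, half + 1)]
    total = sum(g)
    return [v / total for v in g]


g = gaussian_kernel()
print([round(v, 4) for v in g])
# [0.0276, 0.0663, 0.1238, 0.1802, 0.2042, 0.1802, 0.1238, 0.0663, 0.0276]
```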


== Corner detection ==
A corner shows up as a sudden jump in the obtained laser data. The location of this jump can be determined from the gradient of the laser data. Since we still want to filter the data, a kernel based on the gradient of the Gaussian is used.
<math>\nabla G(x)=\frac{-x}{\sigma^2}\frac{1}{\sigma\sqrt{2 \pi}}e^{\frac{-x^2}{2 \sigma^2}}</math>


Resulting in the following kernel for <math>\sigma=2</math> (scaled by a constant factor, which is irrelevant when comparing against a threshold):


dg=[0.1353 0.2435 0.3033 0.2206 0 -0.2206 -0.3033 -0.2435 -0.1353]
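The dg values above correspond to sampling <math>-\frac{x}{\sigma^2}e^{\frac{-x^2}{2\sigma^2}}</math>, i.e. the derivative of the Gaussian with its constant factor dropped:

```python
import math

def gaussian_gradient_kernel(size=9, sigma=2.0):
    """Derivative-of-Gaussian kernel. The constant 1/(sigma*sqrt(2*pi))
    is dropped, matching the dg values listed above; it only scales the
    gradient, which does not matter when thresholding."""
    half = size // 2
    return [-x / (sigma * sigma) * math.exp(-x * x / (2.0 * sigma * sigma))
            for x in range(-half, half + 1)]


dg = gaussian_gradient_kernel()
print([round(v, 4) for v in dg])
# [0.1353, 0.2435, 0.3033, 0.2206, 0.0, -0.2206, -0.3033, -0.2435, -0.1353]
```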


The way corners are detected can be visualized using some fictitious laser data, containing both white noise and salt-and-pepper noise. The Gaussian filter reduces the noise reasonably well. The size of the kernel should be tuned: a larger kernel results in less noise, but also in more loss of detail. The green line shows the gradient of the laser data. If a proper threshold is chosen, this gradient can be used to detect a jump in distance and thus a corner.
[[File:Cornerdetection.png|600px]]
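Putting the two kernels together, a minimal corner detector on raw range data could look like this (an illustrative sketch, not the group's actual implementation; the threshold value is hypothetical):

```python
import math

def correlate_same(signal, kernel):
    """Slide the kernel over the signal (edge samples are clamped).
    Since only the magnitude of the result is thresholded, the
    distinction between correlation and convolution does not matter
    for this antisymmetric kernel."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - half, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

def detect_corners(ranges, sigma=2.0, threshold=0.3):
    """Return indices where the Gaussian-smoothed gradient of the laser
    ranges exceeds the threshold, i.e. where the distance jumps."""
    half = 4  # 9-tap kernel, as above
    dg = [-x / (sigma * sigma) * math.exp(-x * x / (2.0 * sigma * sigma))
          for x in range(-half, half + 1)]
    grad = correlate_same(ranges, dg)
    return [i for i, v in enumerate(grad) if abs(v) > threshold]


# A wall at 1 m with a side corridor (3 m) starting at index 10:
print(detect_corners([1.0] * 10 + [3.0] * 10))  # indices around the jump
```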


== Corner movement ==
[[File:Corner_detection.png|right|200px]]
If there is a side corridor, two corners are detected. The distance and corresponding angle to each of those corners are sent to the “go-around-corner node”. Once theta1 is about +90 or -90 degrees, this node takes the following steps:
* Drive forward until L1 = L2
* Turn towards the corner until theta1 = theta2
* Drive forward until theta1 > 90 degrees and theta2 < 90 degrees.
While performing these steps, “wall avoidance” keeps running.
In the next stage of development this node should become more robust and should continuously update and control the position.
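These steps can be sketched as a small state machine (hypothetical state names, command names and tolerances; the real node additionally keeps wall avoidance running):

```python
def corner_step(state, L1, L2, theta1, theta2):
    """One control tick of a sketched 'go-around-corner' node.
    L1, L2 are the distances and theta1, theta2 the angles (degrees)
    to the two detected corners. Returns (next_state, command)."""
    if state == "approach":   # drive forward until L1 = L2
        return ("turn", "stop") if abs(L1 - L2) < 0.05 else ("approach", "forward")
    if state == "turn":       # turn towards the corner until theta1 = theta2
        return ("exit", "stop") if abs(theta1 - theta2) < 2.0 else ("turn", "rotate")
    if state == "exit":       # drive forward until past the corner
        if theta1 > 90.0 and theta2 < 90.0:
            return ("done", "stop")
        return ("exit", "forward")
    return ("done", "stop")
```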


= Maze =
== Mapping ==
[[File:Mapping.png|left|300px]]
Two different sources of information are used for mapping and path determination. The first is the laser data, which is filtered by a Gaussian filter as explained above and then used to determine possible edges. The tool used for this edge detection will probably be the Hough transform: by fitting the edges of highest probability through the laser data, a map is generated.
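As a sketch of the idea, a minimal Hough transform votes for each point over all candidate line angles and keeps the (rho, theta) bin with the most votes (illustrative only; a real implementation would use an optimized library routine and extract several peaks, not just the strongest one):

```python
import math
from collections import Counter

def strongest_line(points, rho_step=0.1, theta_steps=180):
    """Hough transform for lines in normal form
    rho = x*cos(theta) + y*sin(theta), with theta in [0, pi).
    Returns (rho, theta, votes) of the strongest accumulator bin."""
    votes = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(round(rho / rho_step), t)] += 1  # quantize rho into bins
    (rho_bin, t), n = votes.most_common(1)[0]
    return rho_bin * rho_step, math.pi * t / theta_steps, n


# Points sampled along a wall at x = 1 m:
rho, theta, n = strongest_line([(1.0, 0.1 * i) for i in range(10)])
```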


The second source of information is the odometry. The odometry alone is not sufficient to determine the position, because of deviations in the sensor data due to effects like slip. However, the odometry is fairly easy to use, and therefore we use it only to determine an estimate of the position. This estimate serves as an initial guess for the “real” position: it is used to fit the laser data onto the old map, from which the real position can then be determined.


Once the position estimation and mapping are completed, a maze-solving algorithm will be written. It should search for open and closed contours, which are accessible and not accessible, respectively. One of the current ideas is to place “fictitious walls” to cover an area with a closed contour, so that the area is treated as not accessible. This will finally result in a one-way path from entrance to exit.

Revision as of 16:11, 27 October 2013

'''Group Name'''

Team Picobello

'''Group Members'''

{| class="wikitable"
! Name !! Student id !! Email
|-
| Pepijn Smits || 0651350 || p.smits.1@student.tue.nl
|-
| Sanne Janssen || 0657513 || s.e.m.janssen@student.tue.nl
|-
| Rokus Ottervanger || 0650019 || r.a.ottervanger@student.tue.nl
|-
| Tim Korssen || 0649843 || t.korssen@student.tue.nl
|}


= Introduction =

[[File:jazz.jpg|left|200px]]

Nowadays, many human tasks have been taken over by robots. Robot motion control and embedded motion control will, in the future, allow us to use robots for many purposes, such as health care. These (humanoid) robots therefore have to be designed such that they can adapt to sudden situations.

The goal of this project is to put the real-time concepts of embedded software design into practice. This wiki page reviews the design choices that have been made for programming the Jazz robot Pico. Pico is programmed to navigate fully autonomously through a maze, without any user intervention.

The wiki contains three different programs. The first program was used for the corridor competition. In this program, basic skills such as avoiding collisions with the walls, finding corners and turning are implemented.

After the corridor competition, a new design strategy was adopted. The second program consists of low-level code: a wall follower. By keeping the right wall at a fixed distance, a fairly simple but robust program guides Pico through the maze.

Besides this wall-follower strategy, a high-level approach was also developed. This maze-solving program uses wall (line) detection to build a navigation map, which Pico uses to create and follow path lines through the maze. To update Pico's position, the odometry information and the local and global maps are used.

The latter two programs use Pico’s camera to detect arrows in the maze that point Pico in the right direction.

= Data processing =

Pico outputs a lot of sensor data. Most of this data needs to be preprocessed before it can be used by the maze-solving algorithm. The odometry data, laser scan data and image data are discussed below.

== Odometry data ==

One of the incoming data types is the odometry. The odometry information is based on the angular positions of the robot wheels. These angular positions are converted into a Cartesian position, which gives the position and orientation of Pico relative to its starting point. For the navigation software of Pico, only the x,y-position and <math>\theta</math>-orientation are of interest.
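For a differential drive, this conversion integrates the wheel displacements into an (x, y, theta) pose. The sketch below assumes a differential-drive model with a hypothetical wheel base; Pico's actual drive geometry and the group's implementation are not documented here.

```python
import math

def update_pose(pose, d_left, d_right, wheel_base=0.5):
    """Integrate one odometry step. d_left and d_right are the distances
    travelled by the two wheels (wheel radius times the change in
    angular position); wheel_base is the distance between the wheels."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0               # displacement of the robot center
    dtheta = (d_right - d_left) / wheel_base   # change in orientation
    x += d * math.cos(theta + dtheta / 2.0)    # midpoint integration
    y += d * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)


pose = update_pose((0.0, 0.0, 0.0), 1.0, 1.0)  # both wheels 1 m: straight ahead
```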

Due to wheel slip, the odometry information is never fully accurate. Therefore the odometry is only used as an initial guess in the map-based navigation software; for accurate localization, it always needs to be corrected.

This correction is based on the deviation vector obtained from the map updater. This deviation vector gives the (x,y,θ)-shift that is used to fit the local map onto the global map, and therefore represents the error in the odometry. Because of the amount of communication needed between the part of the software that creates the global map and the part that keeps track of the accurate position, these parts are programmed together in one node. This node essentially runs a custom SLAM (Simultaneous Localization and Mapping) algorithm.
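In its simplest form, the correction is just the odometry estimate shifted by the deviation vector (a sketch; a plain additive shift is assumed here, since the group's exact fitting convention is not documented on this page):

```python
def correct_pose(odom_pose, deviation):
    """Apply the (x, y, theta) deviation vector from the map updater to
    the odometry estimate. A plain additive shift is assumed."""
    return tuple(p + d for p, d in zip(odom_pose, deviation))


corrected = correct_pose((1.0, 2.0, 0.5), (0.25, -0.5, 0.125))
```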

== Laserscan data ==

== Arrow detection ==

= Program 1: Corridor Challenge solver =

== Wall avoidance ==

== Edge detection ==

== Centering algorithm ==

== Program structure ==

== Program evaluation ==

= Program 2: Wall Follower =

== Wall avoidance ==

== Dead end detection ==

== Strategy controller ==

== Program structure ==

== Program evaluation ==

= Program 3: Map based strategy =

== Map builder ==

== Odometry ==

== Map updater ==

== Path creator ==

== Line follower ==

== Program structure ==

== Program evaluation ==