Embedded Motion Control 2017 Group 2


Welcome to the group 2 page.

Weekly meetings are on Friday morning.

Group Members

Name                      Report name             Student id
Matthijs van der Burgh    M.F.B. van der Burgh    0771381
Joep Linssen              J.M.H.G.H. Linssen      0815502
Sil Schouten              S. Schouten             0821521
Daniël Pijnenborg         D.H.G. Pijnenborg       0821583
Rens Slenders             R. Slenders             1028611
Joeri Roelofs             J. Roelofs              1029491
Yanick Douven             Y.G.M. Douven           Tutor


Minutes

Minutes of team meetings are available here: [1]

Initial design

The initial design can also be found in File:Initial Design.pdf

Introduction

The goal of this project is to autonomously solve a maze with a robot (PICO) as fast as possible. The robot has sensors, actuators and a computer on board to navigate through the maze. The computer runs Ubuntu 14.04 and the programming language used for the software is C++. In this section the initial design is discussed, which consists of the requirements, functions, components, specifications and interfaces. This design will be used to build structured software and define the required interfaces.

Requirements

To solve the maze problem, the following requirements must be met:

  • Finding the exit
  • Finding and opening the door
  • Avoiding obstacles at all times
  • The maze should be solved within 5 min
  • All tasks must be executed autonomously
  • The system should work on any maze
  • The robot must not get trapped in loops

Functions

The functions which have to be implemented are subdivided into different contexts. Motion functions describe how the software gets input from and supplies output to PICO. Skill functions describe how different skills are executed in the software. Lastly, task functions implement task scheduling.

The motion functions have been determined as:

  • Basic actuation
    • Give PICO the ability to move around and rotate
  • Getting information
    • Receive information from LRF and odometry sensors.
  • Opening door
    • Make a sound to open the door.

The more advanced skill functions are:

  • Mapping
    • Use information from "Getting information" to determine and save a map of the environment. This creates the world model for PICO.
  • Localisation
    • Determine where PICO is located in the map created in "Mapping". This will most likely be implemented in the same function.
  • Object avoidance
    • Use current LRF data to make sure PICO never hits a wall.
  • Maze solving
    • Use the map in combination with current and previous locations within the map to determine a route out of the maze.
  • Door and exit detection
    • Use the map to detect whether possible doors have been found and whether PICO has finished the maze.

The task control implements switching between the following tasks:

  • Map making
    • Done throughout the challenge to improve the current map.
  • Finding doors
    • Done in the first stage of the challenge, when no door has been found yet.
  • Finding the exit
    • Once a door has been found and opened, the task is to go through the door and finish the challenge.
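
As a small illustration of this switching, the sketch below encodes the door and exit phases as states and switches once a door has been found and opened. The enum, names and condition are assumptions, not the actual task controller; map making runs continuously and is therefore not modelled as a separate state here.

  #include <cstdio>

  // Hypothetical sketch of the task switching; names and condition are assumptions.
  enum class Task { FindingDoors, FindingExit };

  Task updateTask(Task current, bool doorFoundAndOpened)
  {
      if (current == Task::FindingDoors && doorFoundAndOpened)
          return Task::FindingExit;   // go through the door and finish the challenge
      return current;
  }

  int main()
  {
      Task task = Task::FindingDoors;
      task = updateTask(task, /*doorFoundAndOpened=*/true);
      std::printf("current task = %s\n",
                  task == Task::FindingExit ? "finding the exit" : "finding doors");
      return 0;
  }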

System overview pre-corridor

Corridor challenge

System overview post-corridor

After the corridor challenge it was decided to take a slightly different approach to the SLAM algorithm and split it into SLAM for localisation and feature detection. This change was made since SLAM is already implemented and tested within ROS and is known to be robust; focus can therefore be placed on other crucial tasks such as primitive detection.

Maze challenge

Initialisation

The maze solver relies on the directions from the primitive detection being correct in the world frame; stated differently, the directions at a decision point (NORTH, EAST, SOUTH, WEST) are defined with respect to the initial rotation of the robot. This adds the requirement that the robot is initialised perpendicular to a wall. Since all corners are approximately right-angled, the directions at every decision point then correspond to NORTH, EAST, SOUTH and WEST.

Solving this problem is fairly straightforward using feature detection. From the first two features, which are always in the same section, the angle between the two points is calculated. This angle is then used to turn PICO in the same direction, resulting in an alignment to the first wall PICO sees. This is repeated until the angle is within a threshold of approximately 3 degrees. This process is shown in simulation in the figure below.
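
To make this alignment step concrete, the sketch below computes the angle of the wall through the first two features with atan2 and compares it against the ~3 degree threshold. It is a minimal illustration only: the Feature struct, the helper name wallAngle and the example coordinates are assumptions, not the actual PICO code. In the real routine the computed angle would be sent to PICO as a rotation command and the check repeated until it passes.

  #include <cmath>
  #include <cstdio>

  // Minimal sketch of the initial alignment check; struct, helper name and
  // example coordinates are assumptions, not the actual implementation.
  struct Feature { double x, y; };   // feature position in the robot frame [m]

  // Angle of the wall through the first two detected features,
  // measured with respect to the robot's x-axis.
  double wallAngle(const Feature& a, const Feature& b)
  {
      return std::atan2(b.y - a.y, b.x - a.x);
  }

  int main()
  {
      const double pi = std::acos(-1.0);
      const double threshold = 3.0 * pi / 180.0;    // ~3 degree threshold

      Feature a = {1.0, 0.30};                      // example: first two features
      Feature b = {2.0, 0.45};                      // on the first wall PICO sees

      double angle = wallAngle(a, b);               // rotate PICO by this angle
      bool aligned = std::fabs(angle) < threshold;  // repeat until this holds
      std::printf("angle = %.1f deg, aligned = %s\n",
                  angle * 180.0 / pi, aligned ? "yes" : "no");
      return 0;
  }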

SLAM

Feature detection

For determining the locations of decision points, corner and segment features are required. Corner features are defined as locations where two connected walls meet, i.e. a corner; segment features are the features that are not corners and describe the beginning and end points of wall segments. In the figure below these features are shown: corner features in red and segment features in blue. Since the laser data contains out-of-range values, these values are set equal to 100, indicating an open section. In these out-of-range sections no features are set.

The strategy to find segment features is based on the difference between two consecutive laser points: if the difference is greater than a threshold value (set at ................), a segment boundary is detected and both points are set as segment features. For the corner features, a split and merge algorithm is applied to every segment (from an even segment feature to the next segment feature, i.e. from 0 to 1 and from 2 to 3).
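
As a rough illustration of this segment-feature step, the sketch below marks a feature pair wherever two consecutive laser ranges differ by more than the threshold, and skips the open (out-of-range) sections stored as 100. The function name, data layout and threshold handling are assumptions, since the exact value is left open above.

  #include <cmath>
  #include <cstddef>
  #include <vector>

  // Rough sketch of the segment-feature step; only the idea (a large jump
  // between consecutive laser ranges marks a segment boundary) follows the text.
  const double OPEN_SECTION = 100.0;   // out-of-range readings are stored as 100

  // Returns indices of laser points that begin or end a wall segment.
  std::vector<std::size_t> segmentFeatures(const std::vector<double>& ranges,
                                           double jumpThreshold)
  {
      std::vector<std::size_t> features;
      for (std::size_t i = 1; i < ranges.size(); ++i)
      {
          // No features are set in open (out-of-range) sections.
          if (ranges[i] >= OPEN_SECTION || ranges[i - 1] >= OPEN_SECTION)
              continue;

          if (std::fabs(ranges[i] - ranges[i - 1]) > jumpThreshold)
          {
              features.push_back(i - 1);   // end of the previous segment
              features.push_back(i);       // start of the next segment
          }
      }
      return features;
  }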

The split and merge algorithm works in the following sequence:

  1. starting node = segment beginning, ending node = segment end
  2. dx and dy are calculated between the starting and ending node
  3. the distance between each measured point (x, y) and the straight line through both nodes is calculated as:
    dist = abs( dy*x - dx*y + x(end)*y(start) - y(end)*x(start) ) / sqrt( dy^2 + dx^2 )
  4. check if all points are within the threshold
    if (maximum distance > threshold)
      the point with the maximum distance is a corner node
      ending node = index of the maximum distance
      go to 2
  5. done

The split and merge algorithm is also shown schematically in the figures below, with some resulting examples.

Some examples of feature detection using split and merge strategy
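
A compact sketch of the split and merge recursion is given below. The Point struct, the function names and the threshold value are assumptions, and this version recurses into both sub-segments after a split (the sequence above only spells out the repetition on the first sub-segment); the distance formula and the "split at the farthest point" rule follow the steps listed above.

  #include <cmath>
  #include <cstddef>
  #include <vector>

  struct Point { double x, y; };

  // Distance from point p to the straight line through a (start) and b (end).
  double lineDistance(const Point& p, const Point& a, const Point& b)
  {
      double dx = b.x - a.x;
      double dy = b.y - a.y;
      return std::fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) /
             std::sqrt(dx * dx + dy * dy);
  }

  // Recursively splits the segment [first, last] and stores corner indices.
  void splitAndMerge(const std::vector<Point>& pts, std::size_t first,
                     std::size_t last, double threshold,
                     std::vector<std::size_t>& corners)
  {
      std::size_t worst = first;
      double worstDist = 0.0;
      for (std::size_t i = first + 1; i < last; ++i)    // find the farthest point
      {
          double d = lineDistance(pts[i], pts[first], pts[last]);
          if (d > worstDist) { worstDist = d; worst = i; }
      }
      if (worstDist > threshold)                        // not all points fit the line
      {
          corners.push_back(worst);                     // farthest point is a corner
          splitAndMerge(pts, first, worst, threshold, corners);
          splitAndMerge(pts, worst, last, threshold, corners);
      }
  }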

Primitive detection

Maze solver

Set-point generation

Potential field

Files

File:Initial Design.pdf