Embedded Motion Control 2016 Group 6

Group Members

0717576 R. Beenders r dot beenders at student dot tue dot nl
0774887 M.P. Bos m dot p dot bos at student dot tue dot nl
0772526 M. Cornelis m dot cornelis at student dot tue dot nl
0746766 S.J.M. Koopmans s dot j dot m dot koopmans at student dot tue dot nl


Design

This report concisely describes the steps that were taken to arrive at the first draft of the program architecture for the PICO robot. First, the requirements on which the program architecture is based are stated, followed by a global explanation of the structure of the eventual program. The components of the robot are briefly discussed, followed by the (expected) specifications. Finally, the interface is discussed.

Requirements

  • The robot has to operate autonomously
  • The robot is not allowed to collide with anything
  • The robot must find the door and request to pass it
  • The robot has to solve the maze within 7 minutes and within 2 tries

Functions

The global structure of the program architecture can be seen in Figure 1. As the figure shows, the code was created top-down: the requirements were translated into a clear goal, after which it was established what was needed to achieve that goal. To determine the location of the door and to solve the maze, some kind of model of the world is needed. Tasks are needed to create this model and to move through the world; the door is found and the maze solved through a combination of the world model and these tasks. To perform the tasks, several skills are required, and which skills are available is limited by what the robot can do. Several skills are therefore defined based on the sensors and actuators present in the robot. A more in-depth explanation of the tasks and skills can be found below.

Program-architecture

Goal

The requirements that directly describe the goal are listed below:

  • The robot must first find the door and request to pass it
  • The robot has to solve the maze as fast as possible

Collision avoidance is incorporated on a lower level because it has to hold for all tasks; this means collision detection and avoidance will be designed as a skill that is used by all tasks involving movement. The requirement that the robot has to operate autonomously is incorporated by default.

Environment context

The environment context contains an array of structs which describe the location, number of connections, and state of orientation points. It also contains the location and rotation of the robot. Whether locations are described relative to each other or to some absolute origin is still to be decided and will depend on the approach used to move the robot through the maze and update the world map. Experiments will show whether the odometry can be sufficiently corrected using the laser, or whether some other form of calibration at orientation points has to be applied.
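As an illustration, the environment context could be stored along the following lines. This is a minimal sketch only: the field names and types are hypothetical and not (necessarily) the actual implementation.

  #include <vector>

  // Sketch of the environment context; names and types are hypothetical.
  enum class PointState { Unvisited, Visited, DeadEnd };

  struct OrientationPoint {
      double x, y;        // location (relative or absolute, see above)
      int connections;    // number of open directions (N, W, S, E)
      PointState state;   // exploration state of this orientation point
  };

  struct EnvironmentContext {
      std::vector<OrientationPoint> points;  // the orientation points
      double robot_x, robot_y, robot_theta;  // robot location and rotation
  };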

Tasks

The tasks can be divided into two groups: feedforward and feedback. The feedforward tasks are 'choose path' and 'create reference'. 'Choose path' lets the robot decide which way to drive from an intersection. 'Create reference' creates a straight line (between the walls) which the robot should follow. The feedback tasks are 'analyze crossing', 'follow reference' and 'check if door opened'. 'Analyze crossing' allows the robot to check which directions are open and which are not; the world has four defined directions: North, West, South, East (N, W, S, E). The robot should also determine whether the intersection is actually a dead end or an open space. 'Follow reference' lets the robot follow the created reference line using a controller (a minimal sketch is given below). 'Check if door opened' makes the robot wait for five seconds to see if the door actually opened (since the robot cannot always see the difference between a door and a wall).
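As an illustration of the 'follow reference' task, a minimal proportional controller could look as follows. The gains and error definitions are placeholders, not tuned values from our code.

  // Hypothetical P-controller for 'follow reference'.
  // lateral_error: signed distance of the robot to the reference line [m]
  // angle_error:   heading difference with the reference line [rad]
  // Returns the rotational velocity command, clipped to the robot limit.
  double followReference(double lateral_error, double angle_error) {
      const double k_lat   = 1.5;  // illustrative gain
      const double k_ang   = 2.0;  // illustrative gain
      const double max_rot = 1.2;  // rad/s, from the specifications

      double cmd = k_lat * lateral_error + k_ang * angle_error;
      if (cmd >  max_rot) cmd =  max_rot;
      if (cmd < -max_rot) cmd = -max_rot;
      return cmd;
  }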

Skills

The robot has several skills, driven by its actuators and sensors, which are used in the tasks. Six skills are defined: 'follow path', 'rotate 90 degrees', 'detect lines', 'request door', 'detect crossing/open-space/dead-end' and 'collision avoidance'. 'Follow path' lets the robot drive forward along a straight path towards the next node. 'Rotate 90 degrees' makes the robot rotate approximately 90 degrees, so that it faces the next direction (N, W, S, E). 'Detect lines' creates straight lines out of the raw data received from the LRF, which allows the robot to build a local version of the world with straight walls. 'Request door' uses the beep sound that makes the door open; there are some requirements for the door to open (for example, the robot has to stand still). 'Detect crossing/open-space/dead-end' lets the robot see when it is done traveling along a path and has found a crossing (or open space or dead end), which can be done by noticing that the walls on the left and right have disappeared. 'Collision avoidance' is a sort of emergency brake which makes the robot stop abruptly once it gets too close to a wall.
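A minimal sketch of the collision-avoidance idea, assuming the LRF ranges are available as a vector (the threshold value is illustrative):

  #include <vector>
  #include <algorithm>

  // Hypothetical emergency brake: report true as soon as any beam sees
  // a wall closer than the safety threshold, so motion can be stopped.
  bool mustBrake(const std::vector<float>& ranges, float threshold = 0.3f) {
      if (ranges.empty()) return false;
      return *std::min_element(ranges.begin(), ranges.end()) < threshold;
  }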

Robot

The robot has some actuators and sensors, which are described below.

Components

The robot consists of both actuators and sensors. The actuators are the wheels, which can drive and rotate the robot. Besides that there is a beep function which can be used to open doors. The sensors are the Laser-Range-Finder (LRF) which can detect the distance to nearby walls, and the odometry which can measure how much the wheels rotate.

Specifications

The robot's maximum speed is 0.5 m/s translational and 1.2 rad/s rotational. The LRF has a field of view of about 4 rad (from -2 to 2 rad) with a resolution of about 1000 points. The measurement accuracy of the odometry is not very important, since the wheels slip a lot: the two driving wheels are not parallel to each other, which causes this slip. The accuracy of the information from the odometry can therefore be very low.
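With a field of view from -2 to 2 rad and roughly 1000 beams, the angle of an individual beam follows directly from its index. A sketch (the exact numbers come from the LRF driver):

  // Convert a beam index to its angle, assuming the specifications above
  // (-2 to 2 rad, ~1000 beams); the exact values come from the driver.
  double beamAngle(int i, int num_beams = 1000,
                   double angle_min = -2.0, double angle_max = 2.0) {
      double increment = (angle_max - angle_min) / (num_beams - 1);
      return angle_min + i * increment;  // roughly 4 mrad per step
  }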

Interfaces

When the program is running, it should print which task the robot is performing. When a task has finished correctly, a completion message is displayed. Any errors that occur are printed together with an explanation of the error. A timer is used to determine how long tasks take; this information can be used to find out which situations make tasks take longer and which tasks take longer in general, so that the code can be optimized accordingly. Besides text there will also be a visualisation of the world mapping: the detected lines are drawn until a crossing is analyzed and are then replaced by a node. As the robot solves the maze, the nodes are connected and a node map of the maze is slowly built.
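The task timer can be as simple as the following sketch (the task name in the message is just an example):

  #include <chrono>
  #include <cstdio>

  int main() {
      // Time a task and print a completion message with its duration.
      auto t0 = std::chrono::steady_clock::now();
      // ... perform the task here ...
      auto t1 = std::chrono::steady_clock::now();
      double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
      std::printf("task 'analyze crossing' complete (%.1f ms)\n", ms);
  }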


First Presentation

The first presentation can be found here: http://cstwiki.wtb.tue.nl/images/EMCPresentationGroup6.pdf

During the first presentation, the idea of three different submodules was introduced: movement, strategy, and detection. The idea is to decouple the system. The detection module always tries to scan the surroundings, no matter what strategy and movement are doing, which is exactly what is desired. Strategy always tries to compute how to handle situations and where to drive. Movement actuates PICO to drive towards the desired point without bumping into walls.

Pres1.png

Corridor Challenge

Code

A simple program has been set up for the corridor challenge. It consists of solid detection (walls, multiple types of edges), which will definitely be useful for the maze challenge. Besides that, PICO uses a few laser sensor values to drive straight through the corridor. If PICO gets too close to a wall, the wall repels PICO by actuating in the y direction (see the sketch below). Once PICO gets close enough to the second edge (the one furthest away from PICO at the start), PICO stops and starts rotating. PICO should then at some point detect two sharp edges and rotate until it is facing the exit. The last step is to simply drive forward (again using laser sensor values to stay away from the walls).
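The wall repulsion can be read as a simple proportional push in the y direction: the closer a wall, the harder it pushes PICO away. A sketch under that reading (all constants are illustrative, not our tuned values):

  // Hypothetical wall repulsion for the corridor: compare the closest
  // range on the left and on the right and push PICO sideways away from
  // the nearer wall once it comes within the comfort distance.
  double repelY(double left_dist, double right_dist) {
      const double comfort = 0.4;  // [m] start pushing inside this range
      const double gain    = 0.8;  // illustrative push gain
      double vy = 0.0;
      if (left_dist  < comfort) vy -= gain * (comfort - left_dist);
      if (right_dist < comfort) vy += gain * (comfort - right_dist);
      return vy;  // y-velocity added to the forward motion command
  }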

Challenge

The challenge failed twice, with the same mistake. The problem was that the second edge was detected twice (two points very close to each other were both set as edge points). That is why PICO thought that the exit was between those two points, and drove directly into that edge.

Retry

During the testing time, we decided to test PICO again with new code. The problem of duplicate edges has been fixed by requiring that two edges are at least a certain distance away from each other. Besides that, some values (for example the push force from walls) have been tuned better. With these settings, PICO can complete the corridor in about 14 seconds.

It should be noted that, due to some miscommunication, we did not get the opportunity to have two test hours before the corridor challenge. If we had had this test time (which was used here to optimize the corridor challenge code) before the challenge, we might have completed it successfully.

Ezgif-4223191716.gif


Code Explanations

Multi-Threading

To make sure that the different modules can run independently, multithreading has been implemented using ZeroMQ [1]. Multithreading allows a clear distinction between the roles of the threads and improves modularity: a thread should not be hindered by the absence or crashing of another thread. If a thread depends on another one for proper execution, it is easy to detect whether the latter is active, for example with a polling function that includes a time-out period. Should the polling function return no proper result, the thread can act accordingly and still execute the tasks it can perform without the dependency.
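As a minimal sketch of this idea (not our actual module code), two threads can exchange messages over an inproc PAIR socket, with zmq_poll providing the time-out; the endpoint name, message contents and timings are arbitrary:

  #include <zmq.h>
  #include <thread>
  #include <chrono>
  #include <cstdio>

  int main() {
      void* ctx = zmq_ctx_new();
      void* rx  = zmq_socket(ctx, ZMQ_PAIR);
      zmq_bind(rx, "inproc://detection");  // bind before the worker connects

      std::thread worker([ctx]() {
          void* tx = zmq_socket(ctx, ZMQ_PAIR);
          zmq_connect(tx, "inproc://detection");
          for (int i = 0; i < 5; ++i) {
              zmq_send(tx, "scan", 4, 0);  // pretend new LRF data arrived
              std::this_thread::sleep_for(std::chrono::milliseconds(50));
          }
          zmq_close(tx);
      });

      for (int i = 0; i < 10; ++i) {
          zmq_pollitem_t item = { rx, 0, ZMQ_POLLIN, 0 };
          if (zmq_poll(&item, 1, 100) > 0) {  // poll with 100 ms time-out
              char buf[16] = {0};
              zmq_recv(rx, buf, sizeof(buf) - 1, 0);
              std::printf("received: %s\n", buf);
          } else {
              std::printf("no data; running independent tasks\n");
          }
      }

      worker.join();
      zmq_close(rx);
      zmq_ctx_term(ctx);
  }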


For this project, the main goal was to keep the tasks of the modules from interfering with one another and to divide the code. The scale of this project is rather small compared to real-life programs, so the gains in execution speed and modularity may not be that large. However, it is generally a good idea to give a project a solid base with future extensions in mind; this becomes even more important with larger groups of programmers and more complex systems.

Detection Module

The core of the detection module is the loop defined in code snippet (1). The loop can be summarized as an execution of Simultaneous Localization And Mapping with an Extended Kalman Filter (EKF-SLAM). The execution fires whenever new data is available, as determined by the function in code snippet (2).
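In outline, the loop can be paraphrased as follows; the helper functions below are stand-ins for the real code in snippets (2) to (5), stubbed so the sketch compiles:

  #include <atomic>
  #include <vector>

  // Hypothetical stand-ins for the real functions in snippets (2)-(5).
  struct Scan { std::vector<float> ranges; };
  bool newDataAvailable(Scan&)                { return false; }  // snippet (2)
  std::vector<int> detectCorners(const Scan&) { return {}; }     // snippets (3), (4)
  void ekfSlamUpdate(const std::vector<int>&) {}                 // snippet (5)

  // Outline of the detection main loop (cf. snippet (1)): the EKF-SLAM
  // step fires only when fresh LRF data has actually arrived.
  void detectionLoop(std::atomic<bool>& running) {
      Scan scan;
      while (running) {
          if (newDataAvailable(scan)) {            // new data? snippet (2)
              auto corners = detectCorners(scan);  // extract landmark corners
              ekfSlamUpdate(corners);              // predict + correct step
          }
      }
  }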


To be able to properly execute EKF-SLAM, landmarks have to be defined and detected. The corners of the maze are used as landmarks. The corners are detected using the function defined in code snippet (3); this function checks for range jumps and for laser points whose neighbourhoods approximately form a 90 or 270 degree angle. The approximation is necessary because the world is not ideal: the LRF is subject to disturbances. Consequently, many "corners" may be detected close to each other (which was the main problem of our corridor challenge implementation). To counteract this phenomenon, an algorithm extracts the best corners from the set of assumed corners; this algorithm is shown in code snippet (4). It converts all assumed corners into subsets of corners that are close to each other and, per subset, returns the corner whose angle is closest to 90 or 270 degrees.
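As an illustration of that extraction step (a sketch only; the real thresholds and data types are in snippet (4)):

  #include <vector>
  #include <cmath>
  #include <algorithm>

  struct Corner {
      double x, y;       // position of the corner candidate
      double angle_deg;  // measured angle of the corner
  };

  // Sketch of the best-corner extraction: group candidate corners that
  // lie within 'min_dist' of each other and keep, per group, the one
  // whose angle is closest to 90 or 270 degrees.
  std::vector<Corner> bestCorners(const std::vector<Corner>& cand,
                                  double min_dist = 0.2) {
      std::vector<Corner> best;
      for (const Corner& c : cand) {
          double score = std::min(std::fabs(c.angle_deg - 90.0),
                                  std::fabs(c.angle_deg - 270.0));
          bool merged = false;
          for (Corner& b : best) {
              if (std::hypot(c.x - b.x, c.y - b.y) < min_dist) {
                  double bscore = std::min(std::fabs(b.angle_deg - 90.0),
                                           std::fabs(b.angle_deg - 270.0));
                  if (score < bscore) b = c;  // better corner in this cluster
                  merged = true;
                  break;
              }
          }
          if (!merged) best.push_back(c);  // start a new cluster
      }
      return best;
  }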


The actual EKF-SLAM implementation is shown in the figure below, taken from [2]:

EKF SLAM steps.png


The main difference between this algorithm and the algorithm we actually need is that there are no visible signatures for the landmarks; they therefore only consist of a position in the world, defined in polar coordinates. Furthermore, using the transformation matrices shown in steps 3 and 14 is computationally very inefficient. These matrices are replaced by "more computation friendly" lines of code within the algorithm. The algorithm is shown in code snippet (5), where each step of the aforementioned algorithm is indicated. Note that the gradient matrix H at step 15 differs from [2]: for our implementation the standard gradient did not work and had to be corrected. The matrices are realized using the OpenCV library [3].
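To illustrate the "more computation friendly" style, the predict step can update only the robot-pose block of the covariance and its cross terms in place, instead of multiplying with the mostly-zero F_x matrices. A sketch with OpenCV (hypothetical, not the code of snippet (5); the motion model and noise values are placeholders):

  #include <opencv2/core.hpp>
  #include <cmath>

  // Sketch of the EKF-SLAM predict step with 2N landmarks in the state.
  // Instead of Fx^T * ... * Fx products (steps 3/14 in [2]), only the
  // 3x3 robot-pose block and its cross terms are written directly.
  void ekfPredict(cv::Mat& mu, cv::Mat& Sigma,
                  double v, double w, double dt) {
      double theta = mu.at<double>(2, 0);

      // Motion update of the robot pose (simple unicycle model).
      mu.at<double>(0, 0) += v * dt * std::cos(theta);
      mu.at<double>(1, 0) += v * dt * std::sin(theta);
      mu.at<double>(2, 0) += w * dt;

      // 3x3 Jacobian of the motion model w.r.t. the robot pose.
      cv::Mat G = cv::Mat::eye(3, 3, CV_64F);
      G.at<double>(0, 2) = -v * dt * std::sin(theta);
      G.at<double>(1, 2) =  v * dt * std::cos(theta);

      int n = Sigma.rows;                              // 3 + 2N
      cv::Mat Srr = Sigma(cv::Rect(0, 0, 3, 3));       // robot block
      cv::Mat Srm = Sigma(cv::Rect(3, 0, n - 3, 3));   // robot-map terms
      cv::Mat Smr = Sigma(cv::Rect(0, 3, 3, n - 3));   // map-robot terms

      cv::Mat newSrr = G * Srr * G.t();
      cv::Mat newSrm = G * Srm;
      newSrr.copyTo(Srr);                  // write blocks back in place
      newSrm.copyTo(Srm);
      cv::Mat(newSrm.t()).copyTo(Smr);

      // Process noise on the robot pose only (illustrative values).
      cv::Mat q = (cv::Mat_<double>(3, 1) << 1e-3, 1e-3, 1e-4);
      Srr += cv::Mat::diag(q);
  }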

Strategy Module

Movement Module

Final Presentation

The final presentation can be found here: http://cstwiki.wtb.tue.nl/images/EMCFINALPresentationGroup6.pdf

Maze Challenge

Lessons Learned

During the entire process of this course (so not only coding), a lot of lessons have been learned. The most important ones are:

  • It is necessary to make a good plan and, if possible, to assign a project leader who keeps that plan in mind. This way the priorities remain correct (which has not always been the case for us) and the overall progress is steadier.
  • Communication is the key to good teamwork. Bad communication (or a lack of communication) caused a few misunderstandings, which meant that sometimes a part of the code was missing or was incompatible with the rest, causing the entire program to fail.
  • It is better to first focus on working code before turning towards details.


Code snippets

(1) Detection main loop: https://gist.github.com/anonymous/fb926ffddbfac24d19c9268c61de8f5c

(2) Detection data check: https://gist.github.com/anonymous/93d2ac7d88fde436319b10270279adb3

(3) Detection detect edges: https://gist.github.com/anonymous/097bb3bdfed2ed05f741589a9ffb9363

(4) Detection best corners: https://gist.github.com/anonymous/f48db1088560bb15d2f7d6592bde0dc9

(5) Detection EKF-SLAM algorithm: https://gist.github.com/anonymous/6cc8fbd9e7c1e685f2382b13a73f1509


References

[1] ZeroMQ main site: http://zeromq.org

[2] Sebastian Thrun, Dieter Fox, Wolfram Burgard (1999-2000), Probabilistic Robotics

[3] OpenCV main site: http://opencv.org