Control Systems Technology Group wiki, user contributions of S119349. Page: E-box, revision of 2016-06-09T16:51:42Z (section: "Undefined reference" - strict ordering).
<hr />
<div>'''''This is the project page of E-box and related projects (E-scope)'''''<br />
<br />
<br />
<br />
= DEVELOPERS =<br />
This section contains necessary information for the developers and programmers who use E-Box in their projects.<br />
<br />
== Using the E-box on your own Linux install ==<br />
It is possible to use the E-box on your own Linux installation, should you already have one. However, since much of the software is written without considering portability, mileage may vary on any particular system. This section may be used for common fixes for installing the E-box on your own Linux install.<br />
<br />
=== "Undefined reference" - strict ordering ===<br />
Contrary to earlier versions, in newer versions of gcc the order of parameters passed on the command line '''does''' matter. When using the default makefile template for the E-box software, you will find that builds fail with complaints about simple functions like 'sin' and 'floor' being undefined references.<br />
<br />
This can be fixed by changing the makefile template, present in <br />
/usr/local/MATLAB/R20*/rtw/c/ectarget/ec_unix.tmf<br />
Change this to the following makefile template: [[file:Ec_unix.tmf.zip]]. This makefile changes the order of parameters passed to gcc, which will make basic models build without errors.<br />
<br />
== E-Box Programming ==<br />
The user manual for the E-box and other EtherCAT slaves can be found here: [[E-box/User_manual|User manual]]<br />
<br />
The manual for installing a real-time Linux system with EtherCAT support can be found here: [[E-box/Install_manual|Install manual]] and [[E-box/Install_manual/FAQ|Frequently Asked Questions]]<br />
<br />
[[Media:EBOX_software_programming.pdf|Programming instructions]]<br />
<br />
[[Media:eclib.pdf|Manual EtherCAT library]]<br />
<br />
<br />
== News / Changes ==<br />
This section lists important changes and news about the software.<br />
<br />
* 2011/11/14 Matlab R2011b is supported from revision (218)<br />
* 2011/08/27 Changed Timer_posix to Timer_posix_AK from revision (142)<br />
* 2011/07/17 SVN is working again<br />
* 2011/07/12 Beckhoff EL3104 will be supported from next revision (106)<br />
<br />
<br />
== References ==<br />
[1] Jeroen de Best and Roel Merry, ''Real-Time Linux for Dummies'', DCT2008.103 [http://www.mate.tue.nl/mate/pdfs/10018.pdf PDF]<br />
<br />
[2] Jeroen de Best and Koen Meessen, ''Realtime Linux'', [[Realtime_Linux]]<br />
<br />
[3] Simple Open EtherCAT Master [http://soem.berlios.de SOEM website]<br />
<br />
<br />
----<br />
----<br />
<br />
= STUDENTS =<br />
E-boxes are used in some of the courses. In this section, students can find the necessary information regarding these courses. <br />
<br />
<br />
== Common Information ==<br />
'''The information and manuals presented in this section are used for all courses.''' The steps given below should be followed before conducting the experiments of 4G031-Printer Casus and 4K410-Digital Motion Control. Course-specific information is given in the [[#Printer Casus - 4G031 | 4G031-Printer Casus]] and [[#Digital Motion Control (DMC) - 4K410 | 4K410-Digital Motion Control]] sections.<br />
<br />
=== Installation of Ubuntu with Wubi ===<br />
'''WARNING: This installation will only work on a normal installation of the Windows Operating System, as it depends on the Windows boot loader. If you use another operating system or if you are unsure, please contact one of the course supervisors. If you have a Mac, Windows needs to be present or installed in order to use this tutorial. For more information, see the guide below.'''<br />
<br />
Wubi is a Linux installer for Windows which can install and uninstall Ubuntu in the same simple and safe way as any other Windows application. Ubuntu will be installed within a single file on the Windows file system, ''ebox_root.disk''. Ubuntu sees this file as a real hard disk.<br />
<br />
The installation of Ubuntu used for this course takes up '''30 GB of free space'''. Therefore, it is recommended to install Ubuntu while connected to the TU/e network via an ethernet cable. Using VPN or wireless is strongly discouraged!<br />
<br />
The Wubi installer can be obtained from the TU/e network share: '''\\ai-stosrv02\EBox'''. Install by opening the executable "wubi_dd-mm-yy-time". Because the installation is quite large, this will take some time, especially creating the virtual disk, so please be patient. Connected to a gigabit network the installation takes approximately 15 minutes; on a 100 Mbit/s connection it takes about an hour. <br />
<br />
When you have installed Ubuntu with Wubi, you can start Ubuntu by rebooting your computer. A menu will appear during startup which allows you to choose whether to run Windows or Ubuntu. When you choose Ubuntu in this menu, you will go to a second boot loader called GRUB. Here you can choose which version of Ubuntu to run, as well as alter the start-up commands for running Ubuntu. Unless you are having problems starting up, just select the default version by pressing enter. When prompted for account information use the following:<br />
<br />
#Username: ''ebox''<br />
#Password: ''ebox123''<br />
<br />
You can change the password if desired.<br />
<br />
=== Installing software for experiments ===<br />
<br />
==== Activating Matlab ====<br />
The version of Matlab installed on Ubuntu needs to be activated. To do so, start Matlab from a terminal window with<br />
# ''sudo su''<br />
# ''matlab''<br />
Then follow the instructions, using the activation key linked on the campus software site: [http://w3.tue.nl/nl/diensten/dienst_ict/services/services_wins/campussoftware/mworks/registratie_en_activatie/ Matlab activation]<br />
<br />
'''If the student activation key doesn't work, take the employee activation key'''<br />
<br />
==== Update to the latest experiment software ====<br />
The software to perform experiments is already present, but possibly outdated. To update the software and the Simulink models used, you have to check out the latest stable revision from the SVN (Subversion) server. This can easily be done using the <code>svn_update</code> script:<br />
<br />
# Open a terminal (via the icon on the desktop)<br />
# Type ''svn_update''<br />
#If you get an error mentioning "an unversioned copy of this file already exists", remove that file with ''rm <path to file>''. For example, ''rm /usr/Desktop/unversionedfile.m''.<br />
<br />
If you are experiencing any problems (bugs/errors) with the experiment software, first make sure you have checked out and installed the latest software revision from the SVN by repeating the steps in [[E-box#Update_to_the_latest_experiment_software | Update to the latest experiment software]] and [[#Compiling and installation the software | Compiling and installing the software]]. Since this update will also update the Simulink templates used in the experiments, it is recommended to save any changes made to these files under a different filename and/or in a different location, preferably on your hard disk.<br />
<br />
==== Compiling and installation the software ====<br />
First start Matlab from a terminal (if not done already):<br />
# Open a terminal<br />
# Type ''sudo su''<br />
#Type ''matlab''<br />
<br />
To obtain a fresh copy of the latest version:<br />
# Run svn_update according to subsection [[#Update to the latest experiment software | Update to the latest experiment software]] in a terminal window.<br />
# Change the Matlab current directory to ''/home/ebox/svn/trunk/src/E-box''<br />
# Run ''make_all_clean'' in matlab<br />
# Run ''make_all_install'' in matlab; answer yes to the TU/e toolboxes installation question<br />
<br />
=== Preparation prior to performing experiments ===<br />
<br />
==== Connecting the E/BOX, changing ethernet index number ====<br />
Connect the power supply to the E/box. Use a network cable to connect your laptop to the In-port of the E/box. Since the index number of the ethernet port you are using can vary between PCs, the right number has to be set:<br />
# Open a terminal<br />
# Type ''sudo geteth'' (returns the right port number, only when the E/box is connected)<br />
# Start matlab (type ''sudo matlab'')<br />
# Type ''changeeth(x)'' in the Matlab command window, where x is the port number found with geteth.<br />
<br />
These commands set the right ethernet index number. This needs to be done each time a new software revision is checked out.<br />
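As a sketch, the two steps above can be chained in a shell. Note that ''geteth'' is stubbed out below for illustration, since the real command (part of the E-box software) probes the connected hardware:

```shell
# Stub standing in for the real geteth, which prints the detected port number
geteth() { echo 2; }                      # pretend port 2 was detected

PORT=$(geteth)                            # capture the port number
CMD="changeeth(${PORT}); exit"            # the Matlab command to run
echo "matlab -r \"${CMD}\""               # the Matlab invocation you would use
```

On the real system you would run ''sudo geteth'' and pass its output to ''changeeth'' inside Matlab, as described in the steps above.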
<br />
==== Save data ====<br />
To be safe, you should save the data you obtain from experiments outside of Ubuntu, so that it will not be deleted if Ubuntu is uninstalled. You can save it to your Windows hard disk or use a web service such as Dropbox or Box. The partition of your hard disk on which you have installed Ubuntu can be found under ''/host''; any other partitions can be found in ''/media'' or in the Places menu in the taskbar.<br />
<br />
<br />
<br />
----<br />
'''Now, follow the steps specific for the [[#Printer Casus - 4G031 | Printer Casus course]] or the [[#Digital Motion Control (DMC) - 4K410 | Digital Motion Control course]]!'''<br />
----<br />
<br />
<br />
<br />
== Printer Casus - 4G031==<br />
The section [[#Performing Real-Time experiments | Performing Real-Time experiments]] gives an introduction to the preparation and the actual execution of the experiments. To use a setup, subscribe on the appropriate registration list in SEL3 (1 setup per day per group).<br />
<br />
Almost every day, assistance is available in SEL3 for several hours, where you can ask your questions. For questions during the experiments, please contact '''Bas van Schepop or your group tutor'''. The project coordinator is Rene van de Molengraft ([[File:Mail_Rene.png]], tel 2998, GEM-Z -1.141).<br />
<br />
=== Hardware === <br />
The setup is a stripped A3 HP printer, see Figure 1. The required hardware for the experimental set-up is listed in Table 1. You have to bring your own notebook to do the experiments.<br />
<br />
[[File:Setup_explained.png|center|thumb|500px|Figure 1]]<br />
<br />
Required hardware<br />
# Notebook with E/box image installed<br />
# Experimental HP printer setup (see Figure 1) including cables<br />
# Amplifier including BNC cable<br />
# E/box including ethernet cable<br />
''Table 1''<br />
<br />
The print head is driven by a DC motor via a belt transmission. The position of the print head is measured by means of a linear encoder; the optical encoder sensor is mounted on the print head. On the left side an end-switch is mounted, which is used to initialize the print head position. Using the E/box it is possible to perform real-time experiments on the printer, i.e. measuring and sending signals in real time. The amplifier provides the necessary current gain for the control signal sent from the E/box to the motor.<br />
<br />
The cables of the hardware should be connected as shown in Table 2 below. Furthermore, the amplifier and the E/box must be connected to their power supplies. CAUTION: NEVER turn the amplifier on before a control signal is defined by the ectarget software (see [[#Performing Real-Time experiments]]); a crash of the printer may result. Some older amplifiers are still labeled 0 to 2.5 V instead of +/- 2.5 V; this label is incorrect, as all amplifiers have a range of +/- 2.5 V.<br />
<br />
''Table 2''<br />
{| width="50%" border="1" cellspacing="0" cellpadding="2"<br />
! align="left"|I/O Printer<br />
! align="left"|I/O Amplifier <br />
! align="left"|I/O E/box <br />
! align="left"|I/O Notebook<br />
|-<br />
| Motor input || LOAD || || <br />
|-<br />
| Encoder output || || Encoder 1 || <br />
|-<br />
| End-switch I/O || || DIGITAL I/O ||<br />
|-<br />
| || +/- 2.5 V in || Analog out 1 ||<br />
|-<br />
| || || Ethernet port || Ethernet port<br />
|}<br />
<br />
<br />
=== Performing Real-Time experiments ===<br />
<br />
The Simulink template for the experiments can be opened by typing ''printer01'' in the Matlab console. Make sure you save any changes to the model under another file name and/or in another location, or you might lose your work when updating the SVN or reinstalling Ubuntu.<br />
<br />
==== External mode ==== <br />
The following steps have to be followed to execute the experiment:<br />
# Open the simulink file and press "Ctrl-B" to start building the real-time code<br />
#If you get any errors, check to see if there is a Ref3 block present. If there is, you first need to open it and accept a trajectory before you can continue.<br />
# Switch on the printer setup<br />
# Open a terminal<br />
# type ''sudo su''<br />
# Go to the folder where you just built the file (the file is built to the current directory of Matlab). Use ''cd'' to specify the path.<br />
# Note: if there is a ref3 block present (yellow), you must first give it a path to follow and accept it, otherwise it will throw an error.<br />
# Type ''./printer01 -w'' in the terminal to execute your experiment. The ''-w'' part means the realtime application will be run in external mode. If you have renamed the model, ''printer01'' in the previous commands has to be replaced by the name of your Simulink model.<br />
# The external mode requires you to connect the Simulink model to the real-time application and start it manually from Simulink. This is done by choosing "Connect to Target" and "Start Real-Time Code" from the Simulation menu, respectively.<br />
#If you get an error, especially one that mentions "eth bind failed", go to Simulation -> Configuration Parameters -> Code Generation and manually change the ID to the one you got from the command "geteth".<br />
# The standard simulation time is 30 seconds. To change this, go in Simulink to "Simulation", "Configuration Parameters" and change the "Stop time".<br />
# After the experiment has finished you can return to Matlab, load your data ("load printer01.mat") and perform the required actions to post-process the measurement data. Save all your commands in a Matlab m-file. Created variables have the prefix ''rt_''<br />
# The ''-w'' option can be omitted to let the real-time application run in stand alone mode.<br />
<br />
==== Useful tips ==== <br />
<br />
To get an idea of the behavior of the open-loop system, one or more suitable input signals for the motor can be chosen while recording the resulting output of the system.<br />
<br />
* By applying a sinusoidal input signal (and assuming linear system behavior) it is possible to determine one point of the Bode diagram. With several of these experiments, a complete Bode diagram can be constructed.<br />
* A simple input signal, e.g. constant for a short period and then zero, gives relevant information about the direction of motion, the amount of friction, etc.<br />
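As a sketch of how a single Bode magnitude point follows from such a sine experiment (the amplitudes below are made-up numbers, not measurements):

```shell
# Input sin(w*t) with amplitude 1.0 produced, after transients died out,
# a steady-state output amplitude of 0.5 at that frequency.
# The Bode magnitude at w is then 20*log10(A_out/A_in):
awk 'BEGIN { A_in = 1.0; A_out = 0.5;
             printf "%.2f dB\n", 20 * log(A_out/A_in) / log(10) }'
```

The phase at that frequency follows from the measured time shift between input and output; repeating this at several frequencies fills in the Bode diagram point by point.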
<br />
For more information regarding system identification, see: ''Feedback Control of Dynamic Systems'', Franklin, Powell.<br />
<br />
Take an extra phase lag in the open loop into account, which is of the order of magnitude of <code>ωT</code>, where ''T'' is the sample time in <code>[s]</code>, equal to the ''Fixed step size'' from the simulation parameters, and <code>ω</code> is the frequency in <code>[rad/s]</code>. Increasing the open-loop gain will eventually always result in unstable behavior.<br />
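For example, with the course's fixed step size ''T'' = 1/2048 s, the extra phase lag at ω = 100 rad/s (an arbitrary example frequency) is roughly:

```shell
# Extra phase lag ~ w*T at w = 100 rad/s with sample time T = 1/2048 s
awk 'BEGIN { w = 100; T = 1/2048;
             lag = w * T;                              # phase lag in rad
             printf "%.4f rad = %.2f deg\n", lag, lag * 180 / 3.14159265 }'
```

A few degrees may sound small, but near the bandwidth of the closed loop it eats directly into the phase margin.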
<br />
Look at the [http://cstwiki.wtb.tue.nl/index.php?title=Printer_Casus_Ebox#Frequently_Asked_Questions Frequently Asked Questions] if you have any questions. This wiki will be updated regularly during the casus.<br />
<br />
==== Troubleshooting ==== <br />
<br />
'''''error:'''''<br />
<br />
Invalid setting for fixed-step size (0.001) in model 'printer01'. All sample times in your model must be an integer multiple of the fixed-step size.<br />
<br />
The sample time period (0.00048828125) of 'printer01/Generated S-Function' is not an integer multiple of the fixed step size (0.001) specified for model.<br />
<br />
'''solution:'''<br />
<br />
* press Ctrl+e<br />
<br />
* select: 'solver'<br />
<br />
* change 'Fixed-step size (fundamental sample time):' under 'Solve options' to value "1/2048"<br />
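To see why the original settings conflict: the S-function runs at a period of 1/2048 s, and 0.001 s is not an integer multiple of it, while 1/2048 s itself trivially is:

```shell
# Ratio of the model's fixed-step size to the S-function period.
# A non-integer ratio triggers the "integer multiple" error.
awk 'BEGIN { ts = 1/2048;
             printf "0.001/ts  = %.3f\n", 0.001/ts;    # 2.048 -> not an integer
             printf "ts/ts     = %.3f\n", ts/ts }'     # 1.000 -> valid
```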
<br />
<br />
'''''error:'''''<br />
<br />
Error(s) encountered while building model "printer01"<br />
<br />
'''solution:''' choose a different name for the model. Using "printer01.mdl" can cause problems because the file can be shadowed by the original "printer01.mdl" in a folder higher on the Matlab path.<br />
<br />
<br />
== Digital Motion Control (DMC) - 4K410 ==<br />
<br />
=== Performing Real-Time experiments ===<br />
<br />
The Simulink template for the experiments can be found in the folder ''Templates\Specific\pato'' inside the ''E/BOX'' folder (see [[#Compiling and installation the software | Compiling and installing the software]]). Make sure you save any changes to the model under another file name and/or in another location, or you might lose your work when updating the SVN or reinstalling Ubuntu.<br />
<br />
==== External mode ====<br />
The following steps have to be followed to execute the experiment:<br />
# Open the simulink file and press "Ctrl-B" to start building the real-time code<br />
#If you get any errors, check to see if there is a Ref3 block present. If there is, you first need to open it and accept a trajectory before you can continue.<br />
# Switch on the pato setup<br />
# Open a terminal<br />
# type ''sudo su''<br />
# Go to the folder where you just built the file (the file is built to the current directory of Matlab). Use cd to specify the path.<br />
# Type ''./pato01 -w'' in the terminal to execute your experiment. The ''-w'' part means the realtime application will be run in external mode. If you have renamed the model, ''pato01'' in the previous commands has to be replaced by the name of your Simulink model.<br />
# The external mode requires you to connect the Simulink model to the real-time application and start it manually from Simulink. This is done by choosing "Connect to Target" and "Start Real-Time Code" from the Simulation menu, respectively.<br />
#If you get an error, especially one that mentions "eth bind failed", go to Simulation -> Configuration Parameters -> Code Generation and manually change the ID to the one you got from the command "geteth".<br />
# The standard simulation time is 30 seconds. To change this, go in Simulink to "Simulation", "Configuration Parameters" and change the "Stop time".<br />
# After the experiment has finished you can return to Matlab, load your data ("load pato01.mat") and perform the required actions to post-process the measurement data. Save all your commands in a Matlab m-file. Created variables have the prefix ''rt_''.<br />
# The ''-w'' option can be omitted to let the real-time application run in stand alone mode.<br />
<br />
==== Qadscope ====<br />
Qadscope is a program to generate outputs and read the input ports of the E/box; it can be seen as a digital oscilloscope and spectrum analyzer. A few guidelines:<br />
* Qadscope can be started by typing ''qs'' at the Matlab prompt.<br />
* Signals can be generated by the signal generator.<br />
* Every input- and output port can be set on and off separately (make sure the right ports are enabled and disabled).<br />
* When relations between signals, for example an FRF, are computed, the red labeled signal is the input signal and the green labeled signal is the output signal.<br />
* The framelength and sample frequency can be adjusted manually.<br />
* When you are done using Qadscope you can just shut it down. All data is available in the Matlab workspace in a structure named "data".<br />
* Always stop a measurement first before saving the figure.<br />
<br />
====EC_target settings====<br />
<br />
If you have made your own Simulink file (without starting from the template), the function ''ectarget settings'' can be used to properly set all simulation parameters. A help file is included. Usage: ''ectarget settings(filename,netif,ts)'', with ''netif'' equal to the ethernet port number and ''ts'' equal to the desired sample time.<br />
<br />
----<br />
<br />
== Frequently Asked Questions ==<br />
<br />
; Ubuntu does not start<br />
: Boot into Windows and check the size of the ''ebox_root.disk'' file. It should be 31,188,844,544 bytes (see Properties). If it is not, you can copy ''ebox_root.disk'' manually from the server.<br />
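As a sketch, the size check can be scripted. The function below takes the size in bytes as an argument; how you obtain that size (e.g. with ''stat -c %s'' on the disk image, whose exact path depends on where Wubi placed it) is left open:

```shell
# Compare a reported size in bytes against the expected ebox_root.disk size
check_ebox_disk() {
  if [ "$1" -eq 31188844544 ]; then
    echo "size OK"
  else
    echo "size WRONG"
  fi
}

check_ebox_disk 31188844544
```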
<br />
; During compiling you get an error or a conflict in sample times (0.001 and 1/2048)<br />
: Update and install the software (this is fixed from r355 onward): [[Printer_Casus_Ebox/Experiments_manual#Update_to_the_latest_experiment_software|Update to the latest software]] and [[Printer_Casus_Ebox/Experiments_manual#Compiling and installation the software|Compiling and installing the software]]. Then start from a fresh copy of the ''printer01'' model.<br />
<br />
; During compiling I get errors with "undefined function or variable ref_part"<br />
: Double-click the ''ref3'' block and click ''accept'' before building.<br />
<br />
; ''svn_update'' does not work<br />
: Run the following command in a terminal:<br />
<nowiki>svn checkout https://e-box.wtb.tue.nl/svn/e-box/trunk/src/E-box/ /home/ebox/svn/trunk/src/E-box/</nowiki><br />
<br />
; The terminal in which the executable is running repeatedly shows<br />
Error occurred getting packet header.<br />
Error occurred in rt_PktServerWork.<br />
Disconnecting from host!<br />
<br><br />
Error occurred getting packet header.<br />
Error occurred in rt_PktServerWork.<br />
Disconnecting from host!<br />
: This is a known Linux-related bug that can usually be solved by rebooting the PC, or by using a different PC.<br />
<br />
; The scope in Simulink only shows the last part of the measurement data<br />
: Use "To Workspace" blocks. For a model named ''printer_01.mdl'', a ''printer_01.mat'' file is created containing the measured signals, for example ''rt_simout''. You can change the name of the ''simout'' signal by double-clicking the "To Workspace" block!<br />
<br />
=== Killing a process in Linux===<br />
If your laptop or an application (like Matlab) stops responding, you can try the following to kill the problematic process:<br />
<br />
'''Method 1'''<br />
<br />
* Type ''sudo su'' (superuser) to get the necessary rights.<br />
* Type ''xkill''; your mouse pointer will change from an arrow into a cross.<br />
* Click on the window you want to close.<br />
<br />
'''Method 2'''<br />
<br />
* Open a new terminal.<br />
* Type ''sudo su'' (superuser) to get the necessary rights.<br />
* Type ''ps aux'' or ''ps -e'' (to get a list of all processes and their IDs). You can also use the command ''top'' to get a list of the processes using the most CPU time.<br />
* Find the ID of the process you want to kill.<br />
* Type ''kill -9 <process id>''<br />
* Alternatively, if you know the process name, you can use the ''pkill'' command; to kill Matlab use ''pkill matlab''<br />
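A harmless way to see these commands in action is to practice on a dummy ''sleep'' process instead of a real application:

```shell
sleep 60 &                              # start a long-running dummy process
PID=$!                                  # its process ID
kill -9 "$PID"                          # force-kill it by ID
wait "$PID" 2>/dev/null || true         # reap the killed process
# Confirm it is gone (kill -0 only tests whether the process exists):
kill -0 "$PID" 2>/dev/null && echo "still running" || echo "killed"
```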
<br />
=== How to reboot cleanly even when the keyboard/mouse are frozen ===<br />
There are normally several ways to reboot cleanly using key combinations such as:<br />
* ''CTRL + ALT+ F2'' (get to a terminal, you can then run top/kill/pkill to discover and kill the offending process)<br />
* ''CTRL + ALT + Backspace'' (kills the graphic session and goes to a console, all graphical applications are terminated too)<br />
* ''CTRL + ALT + DEL'' (reboot)<br />
* If you appear to have no control of the keyboard ''ALT+ SYSRQ + R'' can help to regain control of it.<br />
* ''ALT+ SYSRQ + R-E-I-S-U-B'' (forces a clean reboot even when the keyboard is not responding)<br />
<br />
The last one is in fact a sequence of commands, so to use it, hold down ''ALT+SYSRQ'' and then press the keys ''R-E-I-S-U-B'' in sequence with a couple of seconds between each of the commands.<br />
<br />
== Installing Ubuntu on a Mac==<br />
<br />
Ubuntu can be installed on a Mac in multiple ways, but the easiest way to use the E-box image is to first install Windows, if it is not already present. You will need a Windows 7 ISO or DVD, and a USB drive of at least 4 GB. Since the E-box installation itself is 30 GB, the minimum size of your Windows 7 installation should be about 50 GB. If you have more than 4 GB of memory installed, adding twice the size of your RAM to this amount will allow you to skip certain steps in Step 2.<br />
<br />
A Microsoft Windows 7 [http://msft-dnl.digitalrivercontent.net/msvista/pub/X15-65804/X15-65804.iso image] can be downloaded for free from Microsoft and used for up to 180 days. While not too difficult, an installation can always go wrong; proceed at your own risk.<br />
<br />
=== Step 1: Installing Windows ===<br />
<br />
* Open the Bootcamp Assistant, found in /Applications/Utilities/.<br />
* You will see the following screen if you own a Mac without a DVD drive. If you do own a Mac with a DVD drive, you'll first need to burn the Windows 7 image to a blank DVD. A tutorial can be found, for example, [http://hints.macworld.com/article.php?story=20060619181010389 here].<br />
[[File:Macs1.png|600x600px]]<br />
*Insert the USB stick and click continue. In the following screen you can point the Assistant to the ISO file. Click continue.<br />
[[File:Macs2.png|600x600px]]<br />
*When done, Bootcamp Assistant will warn you to make a backup of your Mac before you attempt to install Windows. This is always a good precaution, particularly because of the need to format the partition on which Windows will be installed: it is possible to select the wrong disk here, erasing all data on your Mac.<br />
[[File:Macs4.png|600x600px]]<br />
*Restart your Mac. If the Windows installation does not start automatically, hold the alt key after the grey screen you see when your Mac turns on.<br />
*Follow the instructions until you see the following screen.<br />
[[File:Macs5.png|600x600px]]<br />
*Make sure to select the partition marked '''BOOTCAMP''', then hit "Drive Options (advanced)".<br />
[[File:Macs6.png|600x600px]]<br />
*In the changed screen, hit "Format".<br />
*Proceed with the installation. Windows will restart your computer several times, and your Mac should be configured to start up into Windows automatically. If not, use the alt key option mentioned above. When Windows is installed, find the Bootcamp tools on the USB stick and install them. This will create an icon in the Windows system tray, which can be used to make Mac OS X the default operating system again. To do this, after following the rest of the steps in this tutorial, click on it and hit "Restart to Mac OS X".<br />
<br />
=== Step 2: Installing Ubuntu ===<br />
Follow the installation guide above to install Ubuntu. If there is not enough free space available and your Windows disk is at least 50 GB, you can follow these two tutorials:<br />
<br />
*[http://www.pcmtechhelp.com/2011/03/14/disable-the-paging-file-on-high-memory-ram-systems-faster-7-part-24/ Disable] your page file, a file on disk roughly the size of your memory.<br />
*[http://helpdeskgeek.com/windows-7/windows-7-delete-hibernation-file-hiberfil-sys/ Disable] your hibernation file, a file created when your Mac goes to sleep while running Windows. This file also reduces your available disk space by the amount of memory you have, which causes considerable trouble on systems with 16 GB of memory.<br />
<br />
===Step 3: Booting Ubuntu and installing additional drivers ===<br />
<br />
If your Mac is fully compatible with Ubuntu, you will be able to skip this step. To find out, reboot your Mac while holding the alt key and select Windows. You will see a screen asking you to choose either Windows or Ubuntu; choose Ubuntu. If all goes well, Ubuntu will complete its installation and you will be greeted by a pink screen asking you to log in.<br />
<br />
When the startup sequence hangs on a Mac, the most likely cause is an incompatible video card driver, though there could be any number of reasons. To troubleshoot this, first try disabling the standard video card driver.<br />
<br />
*Boot your Mac the same way again until you see a screen marked "GNU Grub version ...". Depending on how your installation went, you will see different options here including, but not limited to, several versions of Windows and Ubuntu.<br />
*Select the top Ubuntu option (which should already be selected), and hit "e". You will see something akin to the following screen. (choice of lamp not my own)<br />
[[File:Macs7.JPG|600x600px]]<br />
*Find the line that says "quiet splash" and add "xforcevesa" to it. If the line isn't present, put it at the end of the line before "initrd".<br />
*Hit "ctrl" and "x" together to boot Ubuntu. If this does not work, repeat the same steps with "acpi=off" and "vga=824".<br />
<br />
==Video card driver==<br />
After booting with the aforementioned commands, Ubuntu should boot with a software emulation of the graphics card. While compatible, this does not provide a responsive user interface. By installing closed-source drivers, the potential of most video cards can be unlocked.<br />
<br />
*Go to System -> Administration -> Hardware Drivers. You will see a screen similar to the one below, where you can install custom drivers. Try them one at a time, so that if one causes Ubuntu to no longer boot, you will know which one is the problem.<br />
*Choose one of the video card drivers, preferably the most recent one, and hit install. <br />
<br />
==Wifi drivers==<br />
<br />
Most Macs come with a Broadcom wifi chip, whose driver can also be installed via the same "Hardware Drivers" screen as the video card driver above. On some laptops this is known to cause problems, and a different version needs to be installed via the terminal. An example is given below for the Broadcom BCM43 chip, which can be found, for example, on the 2008 Unibody and 2012 Retina MacBook Pros.<br />
<br />
*Open the Terminal (Applications -> Accessories -> Terminal).<br />
*Copy and paste "sudo apt-get install b43-fwcutter" into the prompt. <br />
*Hit yes on all following prompts.<br />
<br />
If this does not work, you may need to try and find additional commands on the internet. Different chips need different solutions, or yours may just work out of the box. In any case, wired ethernet usually works by default.</div>
<hr />
<div>'''''This is the project page of E-box and related projects (E-scope)'''''<br />
<br />
<br />
<br />
= DEVELOPERS =<br />
This section contains necessary information for the developers and programmers who use E-Box in their projects.<br />
<br />
== Using the E-box on your own Linux install ==<br />
It is possible to use the E-box on your own Linux installation, should you already have one. However, since much of the software is written without considering portability, mileage may vary on any particular system. This section may be used for common fixes for installing the E-box on your own Linux install.<br />
<br />
=== "Undefined reference" - strict ordering ===<br />
Contrary to earlier versions, in new versions of gcc the order of parameters passed on the command line '''does''' matter. When using the default makefile template for the E-box software, you will find that when building software, there will be complaints about simple functions like 'sin' and 'floor' being undefined references.<br />
<br />
This can be fixed by changing the makefile template, present in /usr/local/MATLAB/R20*/rtw/c/ectarget/ec_unix.tmf. Change this to the following makefile template: [[file:Ec_unix.tmf.zip]]. This makefile changes the order of parameters passed to gcc, which will make some basic models build without error.<br />
<br />
== E-Box Programming ==<br />
The user manual for the E-box and other EtherCAT slaves can be found here: [[E-box/User_manual|User manual]]<br />
<br />
The manual for installing a real-time Linux system with EtherCAT support can be found here: [[E-box/Install_manual|Install manual]] and [[E-box/Install_manual/FAQ|Frequently Asked Questions]]<br />
<br />
[[Media:EBOX_software_programming.pdf|Programming instructions]]<br />
<br />
[[Media:eclib.pdf|Manual EtherCAT library]]<br />
<br />
<br />
== News / Changes ==<br />
This section contains the important changes and news to the software<br />
<br />
* 2011/11/14 Matlab R2011b is supported from revision (218)<br />
* 2011/08/27 Changed Timer_posix to Timer_posix_AK from revision (142)<br />
* 2011/07/17 SVN is working again<br />
* 2011/07/12 Beckhoff EL3104 will be supported from next revision (106)<br />
<br />
<br />
== References ==<br />
[1] Jeroen de Best and Roel Merry, ''Real-Time Linux for Dummies'', DCT2008.103 [http://www.mate.tue.nl/mate/pdfs/10018.pdf PDF]<br />
<br />
[2] Jeroen de Best and Koen Meessen, ''Realtime Linux'', [[Realtime_Linux]]<br />
<br />
[3] Simple Open EtherCAT Master [http://soem.berlios.de SOEM website]<br />
<br />
<br />
----<br />
----<br />
<br />
= STUDENTS =<br />
E-boxes are used in some of the courses. In this section, students can find the necessary information regarding these courses. <br />
<br />
<br />
== Common Information ==<br />
'''The information and manuals presented in this section are used for all courses.''' The steps given below should be followed before conducting the experiments of 4G031-Printer Casus and 4K410-Digital Motion Control. The course-specific information is given in the [[#4G031 - Printer Casus | 4G031-Printer Casus]] and [[#4K410 - Digital Motion Control | 4K410-Digital Motion Control]] sections.<br />
<br />
=== Installation of Ubuntu with Wubi ===<br />
'''WARNING: This installation will only work on a normal installation of the Windows Operating System, as it depends on the Windows boot loader. If you use another operating system or if you are unsure, please contact one of the course supervisors. If you have a Mac, Windows needs to be present or installed in order to use this tutorial. For more information, see the guide below.'''<br />
<br />
Wubi is a Linux installer for Windows which can install and uninstall Ubuntu in the same simple and safe way as any other Windows application. Ubuntu will be installed within a single file on the Windows file system, ''ebox_root.disk''. This file is seen by Ubuntu as a real hard disk.<br />
<br />
The installation of Ubuntu used for this course takes up '''30 GB of free space'''. Therefore, it is recommended to install Ubuntu while connected to the TU/e network via an ethernet cable. Using VPN or wireless is strongly discouraged!<br />
<br />
The wubi-installer can be obtained from the TU/e network share: '''\\ai-stosrv02\EBox'''. Install by opening the executable "wubi_dd-mm-yy-time". Because the installation is quite large this will take some time, especially creating the virtual disk, so please be patient. On a gigabit network connection the installation will take approximately 15 minutes; on a 100 Mbit/s network connection it will take about an hour. <br />
<br />
When you have installed Ubuntu with Wubi, you can start Ubuntu by rebooting your computer. A menu will appear during startup which allows you to choose whether to run Windows or Ubuntu. When you choose Ubuntu in this menu, you will go to a new bootloader called GRUB. Here you can choose which version of Ubuntu to run as well as alter the start-up commands for running Ubuntu. Unless you are having problems starting up, you should just select the default version by pressing enter. When prompted for account information use the following:<br />
<br />
#Username: ''ebox''<br />
#Password: ''ebox123''<br />
<br />
You can change the password if desired.<br />
<br />
=== Installing software for experiments ===<br />
<br />
==== Activating Matlab ====<br />
The version of Matlab installed on Ubuntu needs to be activated. To do so, start matlab from a terminal window with <br />
# ''sudo su''<br />
# ''matlab''<br />
and follow the instructions and use the activation key linked on the campus software site. [http://w3.tue.nl/nl/diensten/dienst_ict/services/services_wins/campussoftware/mworks/registratie_en_activatie/ Matlab activatie]<br />
<br />
'''If the student activation key doesn't work, take the employee activation key'''<br />
<br />
==== Update to the latest experiment software ====<br />
The software to perform experiments is already present, but possibly outdated. To update the software and simulink models used, you have to checkout the latest stable revision from the SVN (subversion) server. This can easily be done using the <code>svn_update</code> script:<br />
<br />
# Open a terminal (via the icon on the desktop)<br />
# Type ''svn_update''<br />
#If you get an error mentioning "an unversioned copy of this file already exists", remove that file by typing "rm" followed by the path to the file. For example, "rm /usr/Desktop/unversionedfile.m".<br />
<br />
If you are experiencing any problems (bugs/errors) with the experiment software, then first make sure you have checked out and installed the latest software revision from the SVN by repeating steps [[E-box#Update_to_the_latest_experiment_software | Update to the latest experiment software]] and [[#Compiling and installation the software | Compiling and installation the software]]. Since this update will also update the simulink templates used in the experiments, it is recommended to save any changes made to these files with a different filename and/or in a different location, preferably on your harddisk.<br />
<br />
==== Compiling and installation the software ====<br />
First start matlab from a terminal (if not done already):<br />
# Open a terminal<br />
# Type ''sudo su''<br />
#Type "matlab"<br />
<br />
To obtain a fresh copy of the latest version:<br />
# Run svn_update according to subsection [[#Update to the latest experiment software | Update to the latest experiment software]] in a terminal window.<br />
# Change the Matlab current directory to ''/home/ebox/svn/trunk/src/E-box''<br />
# Run ''make_all_clean'' in matlab<br />
# Run ''make_all_install'' in matlab, answer yes to TU/e toolboxes installation question<br />
<br />
=== Preparation prior to performing experiments ===<br />
<br />
==== Connecting the E/BOX, changing ethernet index number ====<br />
Connect the power supply to the E/BOX. Use a network cable to connect your laptop to the In-port of the E/BOX. Since the index number of the ethernet port you are using can vary between PCs, the right number has to be set:<br />
# Open a terminal<br />
# Type ''sudo geteth'' (returns the right port number, only when the E/box is connected)<br />
# Start matlab (type ''sudo matlab'')<br />
# Type ''changeeth(x)'' in the matlab command window where x is the port number found with geteth.<br />
<br />
These commands set the right ethernet index number; this needs to be done each time a new software revision is checked out.<br />
<br />
==== Save data ====<br />
To be safe, you should save the data you obtain via experiments outside of Ubuntu. This means that it will not be deleted if Ubuntu is uninstalled. You can save it to your Windows hard disk or use a web service such as Dropbox or Box. The partition of your hard disk on which you have installed Ubuntu can be found under ''/host''; any other partitions can be found in ''/media'' or in the Places menu on the taskbar.<br />
<br />
<br />
<br />
----<br />
'''Now, follow the steps specific for [[#Printer Casus - 4G031 | Printer Casus course]] or [[#Digital Motion Control (DMC) - 4K410 | Digital Motion Control course]] ! ! !'''<br />
----<br />
<br />
<br />
<br />
== Printer Casus - 4G031==<br />
The section [[#Performing Real-Time experiments | Performing Real-Time experiments]] includes an introduction to the use of the preparation and the actual execution of experiments. To use a setup, you can subscribe at the appropriate registration lists in SEL3 (1 setup per day per group).<br />
<br />
Almost every day, assistance is available in SEL3 for several hours, where you can ask your questions. For questions during the experiments, please contact '''Bas van Schepop or your group tutor'''. The project coordinator is Rene van de Molengraft ([[File:Mail_Rene.png]], tel 2998, GEM-Z -1.141).<br />
<br />
=== Hardware === <br />
The case study is built around a stripped A3 HP printer, see Figure 1. The required hardware for the experimental set-up is listed in Table 1. You have to bring your own notebook to do experiments.<br />
<br />
[[File:Setup_explained.png|center|thumb|500px|Figure 1]]<br />
<br />
Required hardware<br />
# Notebook with E/box image installed<br />
# Experimental HP printer setup (see Figure 1) including cables<br />
# Amplifier including BNC cable<br />
# E/box including ethernet cable<br />
''Table 1''<br />
<br />
The print head is driven by a DC motor via a belt transmission. The position of the print head is measured by means of a linear encoder; the optical encoder sensor is mounted in the print head. On the left side an end-switch is mounted that is used to initialize the print head position. Using the E/box it is possible to perform real-time experiments on the printer, i.e. measuring and sending signals in real time. The amplifier provides the necessary current gain for the control signal sent from the E/box to the motor.<br />
<br />
The cables of the hardware should be connected as shown in Table 2 below. Furthermore, the amplifier and E/box must be connected to their power supplies. CAUTION: NEVER turn the amplifier on (ON) before a control signal is defined by the ectarget software (see [[#Performing Real-Time experiments]]); a crash of the printer may result. Some older amplifiers are still labeled 0 to 2.5 V instead of +/- 2.5 V. This labeling is incorrect: all amplifiers have a range of +/- 2.5 V.<br />
<br />
''Table 2''<br />
{| width="50%" border="1" cellspacing="0" cellpadding="2"<br />
! align="left"|I/O Printer<br />
! align="left"|I/O Amplifier <br />
! align="left"|I/O E/box <br />
! align="left"|I/O Notebook<br />
|-<br />
| Motor input || LOAD || || <br />
|-<br />
| Encoder output || || Encoder 1 || <br />
|-<br />
| End-switch I/O || || DIGITAL I/O ||<br />
|-<br />
| || +/ − 2, 5 V in || Analog out 1 ||<br />
|-<br />
| || || Ethernet port || Ethernet port<br />
|}<br />
<br />
<br />
=== Performing Real-Time experiments ===<br />
<br />
The simulink template for the experiments can be opened by typing ''printer01'' in the Matlab console. Make sure you save any changes to the model with another file name and/or in another location, or you might lose your work when updating the svn or reinstalling Ubuntu.<br />
<br />
==== External mode ==== <br />
The following steps have to be followed to execute the experiment:<br />
# Open the simulink file and press "Ctrl-B" to start building the real-time code<br />
#If you get any errors, check to see if there is a Ref3 block present. If there is, you first need to open it and accept a trajectory before you can continue.<br />
# Switch on the printer setup<br />
# Open a terminal<br />
# type ''sudo su''<br />
# Go to the folder where you just built the file (the file is built to the current directory of Matlab). Use ''cd'' to specify the path.<br />
# Note: if there is a Ref3 block present (yellow), you must first give it a trajectory to follow and click accept, otherwise it will throw an error.<br />
# Type ''./printer01 -w'' in the terminal to execute your experiment. The ''-w'' part means the realtime application will be run in external mode. If you have renamed the model, ''printer01'' in the previous commands has to be replaced by the name of your Simulink model.<br />
# The external mode requires you to connect the Simulink model to the real-time application and start it manually from the Simulink. This has to be done by choosing "Connect to Target" and "Start Real-time Code" from the Simulation menu respectively.<br />
#If you get an error, especially one that mentions "eth bind failed", go to Simulation -> Configuration Parameters -> Code Generation and manually change the ID to the one you got from the command "geteth".<br />
# The standard simulation time is 30 seconds. To change this, in Simulink go to "Simulation", "Configuration Parameters" and change the "Stop time".<br />
# After the experiment has finished you can return to Matlab, load your data ("load printer01.mat") and perform the required actions to post-process the measurement data. Save all your commands in a Matlab m-file. Created variables have the prefix ''rt_''<br />
# The ''-w'' option can be omitted to let the real-time application run in stand alone mode.<br />
<br />
==== Useful tips ==== <br />
<br />
To get an idea of the behavior of the open-loop system, one or more suitable input signals for the motor can be chosen while recording the resulting output of the system.<br />
<br />
* By applying a sinusoidal input signal (and assuming linear system behavior) it is possible to determine one point of the Bode diagram. With several of these experiments, a complete Bode diagram can be constructed.<br />
* A simple input signal, e.g. constant for a short period and then zero, gives relevant information about the direction of motion, the amount of friction, etc.<br />
<br />
For more information regarding system identification, the following book can be read: Feedback Control of Dynamic Systems, Franklin, Powell.<br />
<br />
Take an extra phase lag in the open loop into account, which has an order of magnitude of <code>ωT</code>, where ''T'' is the sample time in <code>[s]</code>, equal to the ''Fixed step size'' from the simulation parameters. Furthermore, <code>ω</code> is the frequency in <code>[rad/s]</code>. Increasing the open-loop gain will eventually always result in unstable behavior.<br />
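To get a feel for the size of this effect, the extra phase lag <code>ωT</code> can be evaluated at the course's sample rate of 2048 Hz. This is an illustrative Python sketch, not part of the course software:<br />

```python
import math

T = 1.0 / 2048.0   # sample time [s], the fixed step size used in the course

for f_hz in (10.0, 50.0, 100.0):
    omega = 2.0 * math.pi * f_hz       # frequency [rad/s]
    lag_deg = math.degrees(omega * T)  # extra open-loop phase lag [deg]
    print(f"{f_hz:6.1f} Hz -> {lag_deg:.2f} deg extra phase lag")
```

At 100 Hz this is already about 17.6 degrees, which is why the extra lag matters when shaping the loop near the bandwidth.<br />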
<br />
Look at [http://cstwiki.wtb.tue.nl/index.php?title=Printer_Casus_Ebox#Frequently_Asked_Questions Frequently Asked Questions] if there are any questions. This wiki will be updated during the casus on a regular basis.<br />
<br />
==== Troubleshooting ==== <br />
<br />
'''''error:'''''<br />
<br />
Invalid setting for fixed-step size (0.001) in model 'printer01'. All sample times in your model must be an integer multiple of the fixed-step size.<br />
<br />
The sample time period (0.00048828125) of 'printer01/Generated S-Function' is not an integer multiple of the fixed step size (0.001) specified for model.<br />
<br />
'''solution:'''<br />
<br />
* press Ctrl+e<br />
<br />
* select: 'solver'<br />
<br />
* change 'Fixed-step size (fundamental sample time):' under 'Solve options' to value "1/2048"<br />
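The arithmetic behind this fix: the generated S-function runs at a sample time of 1/2048 s = 0.00048828125 s, which is not an integer multiple of a 0.001 s fixed step, but trivially is of a 1/2048 s step. This can be checked with exact rational arithmetic (illustrative Python sketch, not part of the toolchain):<br />

```python
from fractions import Fraction

s_function_ts = Fraction(1, 2048)   # sample time of the generated S-function

for fixed_step in (Fraction(1, 1000), Fraction(1, 2048)):
    ratio = s_function_ts / fixed_step
    ok = ratio.denominator == 1     # is it an integer multiple of the step?
    print(float(fixed_step), "->", "ok" if ok else "invalid")
```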
<br />
<br />
'''''error:'''''<br />
<br />
Error(s) encountered while building model "printer01"<br />
<br />
'''solution:''' choose a different name for the model. Using "printer01.mdl" can cause problems because the file can be shadowed by the original "printer01.mdl" in a folder higher on the Matlab path.<br />
<br />
<br />
== Digital Motion Control (DMC) - 4K410 ==<br />
<br />
=== Performing Real-Time experiments ===<br />
<br />
The simulink template for the experiments can be found in the folder ''Templates/Specific/pato'' inside the ''E/BOX'' folder (see [[#Compiling and installation the software | Compiling and installation the software]]). Make sure you save any changes to the model with another file name and/or in another location, or you might lose your work when updating the svn or reinstalling Ubuntu.<br />
<br />
==== External mode ====<br />
The following steps have to be followed to execute the experiment:<br />
# Open the simulink file and press "Ctrl-B" to start building the real-time code<br />
#If you get any errors, check to see if there is a Ref3 block present. If there is, you first need to open it and accept a trajectory before you can continue.<br />
# Switch on the pato setup<br />
# Open a terminal<br />
# type ''sudo su''<br />
# Go to the folder where you just built the file (the file is built to the current directory of Matlab). Use cd to specify the path.<br />
# Type ''./pato01 -w'' in the terminal to execute your experiment. The ''-w'' part means the realtime application will be run in external mode. If you have renamed the model, ''pato01'' in the previous commands has to be replaced by the name of your Simulink model.<br />
# The external mode requires you to connect the Simulink model to the real-time application and start it manually from the Simulink. This has to be done by choosing "Connect to Target" and "Start Real-time Code" from the Simulation menu respectively.<br />
#If you get an error, especially one that mentions "eth bind failed", go to Simulation -> Configuration Parameters -> Code Generation and manually change the ID to the one you got from the command "geteth".<br />
# The standard simulation time is 30 seconds. To change this, in Simulink go to "Simulation", "Configuration Parameters" and change the "Stop time".<br />
# After the experiment has finished you can return to Matlab, load your data ("load pato01.mat") and perform the required actions to post-process the measurement data. Save all your commands in a Matlab m-file. Created variables have the prefix ''rt_''<br />
# The ''-w'' option can be omitted to let the real-time application run in stand alone mode.<br />
<br />
==== Qadscope ====<br />
Qadscope is a program to generate outputs and read the input ports of the E/BOX; it can be seen as a digital oscilloscope and spectrum analyzer. A few guidelines:<br />
* Qadscope can be started by typing qs in the Matlab prompt.<br />
* Signals can be generated by the signal generator.<br />
* Every input- and output port can be set on and off separately (make sure the right ports are enabled and disabled).<br />
* When relations between signals, for example an FRF, are computed, the red labeled signal is the input signal and the green labeled signal is the output signal.<br />
* The framelength and sample frequency can be adjusted manually.<br />
* When you are done using Qadscope you can just shut it down. All data is available in the Matlab workspace in a structure named "data".<br />
* Always stop a measurement first before saving the figure.<br />
<br />
====EC_target settings====<br />
<br />
If you have made your own Simulink file (without starting from the template), the function ectarget settings can be used to properly set all simulation parameters. A help file is included. Usage: ectarget settings(filename,netif,ts), with netif equal to the ethernet port number and ts equal to the desired sample time.<br />
<br />
----<br />
<br />
== Frequently Asked Questions ==<br />
<br />
; Ubuntu does not start<br />
: Boot into Windows and check the size of the ''ebox_root.disk'' file. It should be 31,188,844,544 bytes (see its properties). If it is not, you can manually copy ''ebox_root.disk'' from the server.<br />
<br />
; While compiling you get an error or a conflict in sample times (0.001 and 1/2048)<br />
: Update and install the software (this is fixed from r355 onwards): [[Printer_Casus_Ebox/Experiments_manual#Update_to_the_latest_experiment_software|Update to the latest software]] and [[Printer_Casus_Ebox/Experiments_manual#Compiling and installation the software|Compiling and installation the software]]. Then start from a fresh copy of the ''printer01'' model.<br />
<br />
; While compiling I get errors with "undefined function or variable ref_part"<br />
: Double-click the ''ref3'' block and click ''accept'' before building.<br />
<br />
; ''svn_update'' does not work<br />
: Run the following command in a terminal:<br />
<nowiki>svn checkout https://e-box.wtb.tue.nl/svn/e-box/trunk/src/E-box/ /home/ebox/svn/trunk/src/E-box/</nowiki><br />
<br />
; The terminal running the executable repeatedly shows<br />
Error occurred getting packet header.<br />
Error occurred in rt_PktServerWork.<br />
Disconnecting from host!<br />
<br><br />
Error occurred getting packet header.<br />
Error occurred in rt_PktServerWork.<br />
Disconnecting from host!<br />
: This is a known Linux-related bug that can usually be solved by rebooting the PC, or by using a different PC.<br />
<br />
; The scope in Simulink only shows the last part of the measurement data<br />
: Use "To Workspace" blocks. For a model named ''printer_01.mdl'' a ''printer_01.mat'' file is created containing the measured signals, for example ''rt_simout''. The name of the signal ''simout'' can be changed by double-clicking the "To Workspace" block.<br />
<br />
=== Killing a process in Linux===<br />
If your laptop or an application (like Matlab) stops reacting you can try the following to kill the process giving problems:<br />
<br />
'''Method 1'''<br />
<br />
* Type ''sudo su'' (superuser) to get the necessary rights.<br />
* Type ''xkill'', your mouse pointer will change into a cross from an arrow.<br />
* Click on the window you want to close.<br />
<br />
'''Method 2'''<br />
<br />
* Open a new terminal.<br />
* Type ''sudo su'' (superuser) to get the necessary rights.<br />
* Type ''ps -aux'' or ''ps -e'' (to get a list of all processes and their IDs). You can also use the command ''top'' to get a list of the processes using the most CPU time.<br />
* Find the ID of the process you want to kill.<br />
* Type ''kill -9 <process id>''<br />
* Alternatively, if you know the process name, you can use the ''pkill'' command; to kill Matlab use ''pkill matlab''<br />
<br />
=== How to reboot cleanly even when the keyboard/mouse are frozen ===<br />
There are normally several ways to reboot cleanly using key combinations such as:<br />
* ''CTRL + ALT+ F2'' (get to a terminal, you can then run top/kill/pkill to discover and kill the offending process)<br />
* ''CTRL + ALT + Backspace'' (kills the graphic session and goes to a console, all graphical applications are terminated too)<br />
* ''CTRL + ALT + DEL'' (reboot)<br />
* If you appear to have no control of the keyboard ''ALT+ SYSRQ + R'' can help to regain control of it.<br />
* ''ALT+ SYSRQ + R-E-I-S-U-B'' (forces a clean reboot even when the keyboard is not responding)<br />
<br />
The last one is in fact a sequence of commands, so to use it, hold down ''ALT+SYSRQ'' and then press the keys ''R-E-I-S-U-B'' in sequence with a couple of seconds between each of the commands.<br />
<br />
== Installing Ubuntu on a Mac==<br />
<br />
Ubuntu can be installed on a Mac in multiple ways, but the easiest way to use the Ebox image is to first install Windows, if it is not already present. You will need a Windows 7 ISO or DVD, and a USB drive of at least 4 GB. Since the Ebox installation itself is 30 GB, the minimum size of your Windows 7 installation should be about 50 GB. If you have more than 4 GB of memory installed, adding twice the size of your RAM to this amount will allow you to skip certain steps in Step 2.<br />
<br />
A Microsoft Windows 7 [http://msft-dnl.digitalrivercontent.net/msvista/pub/X15-65804/X15-65804.iso image] can be downloaded for free from Microsoft, and used up to 180 days. While not too difficult, an installation could always go wrong. Proceed at your own risk.<br />
<br />
=== Step 1: Installing Windows ===<br />
<br />
* Open the Bootcamp Assistant, found in /Applications/Utilities/.<br />
* You will see the following screen if you own a Mac without a DVD drive. If you do own a Mac with a DVD drive, you'll first need to burn the Windows 7 image to a blank DVD. A tutorial can be found, for example, [http://hints.macworld.com/article.php?story=20060619181010389 here].<br />
[[File:Macs1.png|600x600px]]<br />
*Insert the USB stick, and click continue. You can now point the Assistant to the ISO file in the following screen. Click continue.<br />
[[File:Macs2.png|600x600px]]<br />
*When done, Bootcamp Assistant will warn you to make a backup of your Mac before you attempt to install Windows. This is always a good precaution, particularly because of the need to format the partition on which Windows will be installed: it is possible to select the wrong disk here, erasing all data on your Mac.<br />
[[File:Macs4.png|600x600px]]<br />
*Restart your Mac. If the Windows installation does not start automatically, hold the alt key after the grey screen you see when your Mac turns on.<br />
*Follow the instructions until you see the following screen.<br />
[[File:Macs5.png|600x600px]]<br />
*Make sure to select the partition marked '''BOOTCAMP''', then hit "Drive Options (advanced)".<br />
[[File:Macs6.png|600x600px]]<br />
*In the changed screen, hit "Format".<br />
*Proceed with the installation. Windows will restart your computer several times, and your Mac should be configured to boot into Windows automatically. If not, use the alt key option mentioned above. When Windows is installed, find the Bootcamp tools on the USB stick and install them. This will create an icon in the Windows system tray, which can be used to make Mac OS X the default operating system again. In order to do this, after following the rest of the steps in this tutorial, click on it and hit "Restart to Mac OS X".<br />
<br />
=== Step 2: Installing Ubuntu ===<br />
Follow the installation guide above to install Ubuntu. If there is not enough free space available and your Windows disk is at least 50 GB, you can follow the following two tutorials.<br />
<br />
*[http://www.pcmtechhelp.com/2011/03/14/disable-the-paging-file-on-high-memory-ram-systems-faster-7-part-24/ Disable] your page file, which reserves disk space equal to the size of your memory.<br />
*[http://helpdeskgeek.com/windows-7/windows-7-delete-hibernation-file-hiberfil-sys/ Disable] your hibernation file, a file created when your Mac goes to sleep with Windows running. This file also reduces your available disk space by the amount of memory you have, which causes considerable trouble on 16 GB memory systems.<br />
<br />
===Step 3: Booting Ubuntu and installing additional drivers ===<br />
<br />
If your Mac is fully compatible with Ubuntu, you will be able to skip this step. To find out, reboot your Mac while holding the alt key, and select Windows. You will see a screen asking you to choose either Windows or Ubuntu; choose Ubuntu. If all goes well Ubuntu will complete its installation and you will be greeted by a pink screen asking you to log in.<br />
<br />
When the startup sequence hangs on a Mac, the most likely reason is an incompatible video card driver, though there can be other causes. In order to troubleshoot this, first try disabling the standard video card driver.<br />
<br />
*Boot your Mac the same way again until you see a screen marked "GNU Grub version ...". Depending on how your installation went, you will see different options here including, but not limited to, several versions of Windows and Ubuntu.<br />
*Select the top Ubuntu option (which should already be selected), and hit "e". You will see something akin to the following screen. (choice of lamp not my own)<br />
[[File:Macs7.JPG|600x600px]]<br />
*Find the line that says "quiet splash", and add "xforcevesa". If the line isn't present, find the end of the line before "initrd" and put it there.<br />
*Hit "ctrl" and "x" together to boot Ubuntu. If this does not work, repeat the same steps with "acpi=off" and "vga=824".<br />
<br />
==Video card driver==<br />
After booting with the aforementioned commands, Ubuntu should boot with a software emulation of the graphics card. While compatible, this does not provide a responsive user interface. By installing closed-source drivers, the potential of most video cards can be unlocked.<br />
<br />
*Go to System -> Administration -> Hardware Drivers. You will see a screen similar to the one below, where you can install custom drivers. Try them one at a time, so if one causes Ubuntu to no longer boot, you will know which one is the problem.<br />
*Choose one of the videocard drivers, preferably the most recent one, and hit install. <br />
<br />
==Wifi drivers==<br />
<br />
Most Macs come with a Broadcom wifi chip, whose driver can also be installed via the same "Hardware Drivers" screen as the video card driver above. On some laptops this has been known to cause problems, and a different version of the driver needs to be installed via the terminal. An example is given below for the Broadcom BCM43 chip, which can be found, for example, on the 2008 Unibody and 2012 Retina MacBook Pros.<br />
<br />
*Open the Terminal (Applications -> Accessories -> Terminal).<br />
*Copy and paste "sudo apt-get install b43-fwcutter" into the prompt. <br />
*Hit yes on all following prompts.<br />
<br />
If this does not work, you may need to try and find additional commands on the internet. Different chips need different solutions, or yours may just work out of the box. In any case, wired ethernet usually works by default.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=File:Ec_unix.tmf.zip&diff=28079File:Ec unix.tmf.zip2016-06-09T14:47:51Z<p>S119349: Modified makefile template for the Ethercat E-box software, to be compatible with more modern gcc versions.</p>
<hr />
<div>Modified makefile template for the Ethercat E-box software, to be compatible with more modern gcc versions.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=21068Embedded Motion Control 2015 Group 3/Scan2015-06-26T18:53:37Z<p>S119349: /* Scan */</p>
<hr />
<div>= Scan = <br />
<br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder data. The PICO robot has a 270-degree view, divided into approximately a thousand increments.<br />
<br />
=== Potential field ===<br />
<br />
By splitting the received laser data from the LRF into x and y components and summing them, a large vector is obtained that gives the appropriate angle for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.<br />
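The vector-summing idea can be sketched as follows. This is an illustrative Python sketch with assumed names and an assumed beam geometry (270-degree field of view, angles relative to the robot's forward axis); the actual implementation is part of the PICO software.<br />

```python
import math

def potential_field_angle(ranges, fov_deg=270.0):
    """Sum all LRF beams as vectors and return the angle of the resultant [rad].

    Angle 0 is the robot's forward direction; positive angles are to one side.
    """
    n = len(ranges)
    start = math.radians(-fov_deg / 2.0)    # angle of the first beam
    step = math.radians(fov_deg) / (n - 1)  # angular increment per beam
    sx = sy = 0.0
    for i, r in enumerate(ranges):
        a = start + i * step
        sx += r * math.cos(a)               # forward component
        sy += r * math.sin(a)               # sideways component
    # Only the direction matters; the magnitude is ignored by the Drive block.
    return math.atan2(sy, sx)

# Symmetric corridor: equal room on both sides -> PICO drives straight ahead.
print(potential_field_angle([1.0] * 1001))  # ~0.0
```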
<br />
In straight corridors the potential field will let PICO drive in the middle in a robust manner. In the case that PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, there has to be an extra element to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer is added to the laser range finder data that PICO sees. From there on the potential field will do its work and PICO will drive in its desired direction.<br />
<br />
The potential field function will perceive these virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The 'decision maker' in combination with the 'mapping algorithm' will decide were to place the virtual walls.<br />
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector will always point towards the direction with the most room and therefore it is sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into the wall, the attempt to solve the maze has failed. An additional safety layer has been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple adjacent beams is smaller than this fixed parameter, the robot will move in the opposite direction. Below a simulation can be seen for collision avoidance.<br />
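The second safety layer can be sketched like this. The names, the 0.3 m margin, and the beam count are illustrative assumptions, not values taken from the actual software.<br />

```python
SAFETY_MARGIN = 0.3   # [m], illustrative value for the fixed safety margin
MIN_BEAMS = 5         # consecutive beams that must violate the margin

def too_close(ranges, margin=SAFETY_MARGIN, min_beams=MIN_BEAMS):
    """Return True if min_beams consecutive readings fall below the margin."""
    run = 0
    for r in ranges:
        run = run + 1 if r < margin else 0  # count a run of close readings
        if run >= min_beams:
            return True
    return False

print(too_close([1.0] * 100))                           # False: all clear
print(too_close([1.0] * 50 + [0.2] * 10 + [1.0] * 40))  # True: obstacle close by
```

Requiring several adjacent beams avoids reacting to a single noisy reading.<br />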
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze, it is also an important part of mapping the maze, see [[Embedded_Motion_Control_2015_Group_3/Mapping| Mapping]].<br />
<br />
Since the maze is axis aligned there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by comparing the ranges of beams n+10 and n-10: if they differ by more than 30 centimeters, there is a corridor on that side. With this simple but very effective method, left and right corridors can be distinguished. Next, it must be detected whether there is a corridor in front. This is done by summing multiple beams at the front and dividing by the number of beams. If this average differs more than a set value from the middle beam, there is a corridor. <br />
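The two checks can be sketched as follows; the 0.3 m side threshold follows the text, while the front-window size, the front threshold and the function names are placeholder assumptions:<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Side corridor: the ranges at beams n-10 and n+10 differ by more than
// 0.3 m, indicating one beam looks into a corridor and the other at a wall.
bool sideCorridorAt(const std::vector<double>& ranges, int n) {
    return std::fabs(ranges[n + 10] - ranges[n - 10]) > 0.3;
}

// Front corridor: the mean of a window of front beams deviates from the
// middle beam by more than `threshold` (a made-up tuning value here).
bool frontCorridor(const std::vector<double>& ranges, int middle,
                   int window, double threshold) {
    double sum = 0.0;
    for (int i = middle - window; i <= middle + window; ++i) sum += ranges[i];
    double mean = sum / (2 * window + 1);
    return std::fabs(mean - ranges[middle]) > threshold;
}
```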
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 meter, PICO knows it is in an open space and therefore starts wall hugging in order to find the exit. PICO stops this procedure when the width of the corridor is equal to or smaller than 1.5 meters, which is the maximum size of a corridor. Below a simulation can be seen for handling an open space.<br />
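In code, the open-space test described above could look like this (a sketch; the 80% fraction and 1 m limit are the values quoted in the text, the function name is ours):<br />

```cpp
#include <cassert>
#include <vector>

// Open space: at least 80% of the beams see further than 1 m.
bool inOpenSpace(const std::vector<double>& ranges) {
    int far_beams = 0;
    for (double r : ranges)
        if (r > 1.0) ++far_beams;                // beam sees more than 1 m
    return far_beams >= 0.8 * ranges.size();     // 80% of all beams
}
```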
<br />
[[File:open_space.gif|200px|frame|left|Pico handling open space]]<br />
[[File:collision.gif|200px|frame|center|Pico using collision avoidance]]<br />
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. First, individual virtual walls are constructed by which potential corridors are blocked. This leads PICO in the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider the crossroad shown in the picture below. The left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go in. By slightly modifying the data, the actual vision as seen in the simulator can be constructed, as shown in the figures below. In the figure on the left, two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corner is used as a reference point for computing the radius at which the virtual walls are placed. In the right figure the actual constructed walls can be seen. The corridor in front and the left corridor are blocked, so by blocking these pathways the potential field will lead PICO in the desired right direction.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
===== T-junction =====<br />
<br />
Now a T-junction is examined further. The figure below shows what PICO sees in this case: two maxima with a minimum in between. These two maxima are used as bounds for finding the minimum. When the robot turns, it is hard to keep a reference point; locating this minimum is therefore a good method of finding the reference point needed to construct the virtual walls. The actual constructed walls can also be seen.<br />
<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different: using the above method will not result in a minimum that represents a corner. Locating this minimum is still useful, however: depending on the kind of turn (100+n[minimum] or 100-n[minimum]), a radius is computed which represents the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
Since a door is basically a dead end which can be opened, PICO should search for dead ends. These have a specific profile in which three connected walls are always visible. Since the maze is axis aligned, two of these walls will be parallel; the third wall is positioned perpendicular to the parallel ones. Dead ends can be found at the end of a corridor and in junctions longer than 0.3 meters. This means PICO has to detect dead ends/doors in front and at the sides. Therefore PICO searches in three regions of its LRF data for doors. All regions use the same method for detecting doors. This method is based on the profile of a dead end: PICO tries to recognize dead ends by searching its LRF data under certain conditions. PICO will search for the two corner points where the walls meet. In between these corner points, the dead end/door can be detected. <br />
In the figure below, the points which PICO must detect are highlighted by a red cross. PICO starts by searching for the left corner point [1] within a certain range. If this point is found, PICO searches for the wall or door [2], which should be to the right of this point. If this point is found as well, PICO searches for the last point, the right corner point [3]. PICO will only look for the next point if the previous one has been found. <br />
<br />
To prevent PICO from wrongly detecting doors, two extra conditions to qualify as a door/dead end are added. When the points are found, the distances to these points and the angles are known. This makes it possible to determine whether all points are aligned, by calculating the distance of each point to PICO as shown in figure 3 ("distance to pico"). All three lengths should be approximately equal. <br />
The second condition is that the length of the door has to be within the bounds given in the assignment, thus: 0.5 < door length < 1.5. If all conditions are met, this area qualifies as a dead end/door. Once the distance to PICO has become small enough, PICO has reached the door area.<br />
During the experiments we noticed that doors could still be wrongly detected due to measurement noise, or when walls are placed slightly askew. Therefore a last fix was introduced: PICO has to detect a dead end/door ten times in a row before it actually qualifies as a real dead end/door. This eliminates measurement noise and faults. <br />
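The qualification logic, including the ten-in-a-row filter, can be sketched as follows (the struct layout, tolerance value and function names are assumptions; only the aligned-distances condition, the 0.5–1.5 m bounds and the counter come from the text):<br />

```cpp
#include <cassert>
#include <cmath>

// Candidate dead end/door described by the three detected points.
struct DoorCandidate {
    double d_left, d_mid, d_right;  // "distance to pico" of points 1, 2, 3
    double door_length;             // distance between the two corner points
};

// Condition 1: all three distances approximately equal (points aligned).
bool pointsAligned(const DoorCandidate& c, double tol) {
    return std::fabs(c.d_left - c.d_mid) < tol &&
           std::fabs(c.d_mid - c.d_right) < tol;
}

// Run both conditions and update the consecutive-detection streak; the
// candidate only counts as a real door/dead end once the streak reaches 10.
int updateDoorStreak(const DoorCandidate& c, int streak, double tol) {
    bool ok = pointsAligned(c, tol) &&
              c.door_length > 0.5 && c.door_length < 1.5;  // bounds from text
    return ok ? streak + 1 : 0;  // reset on any failed detection
}
```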
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]<br />
<br />
If PICO has detected a dead end/door and is positioned in the door area, it will immediately stop and send the door request. After this, PICO stands still for six seconds to wait and see whether the door opens. After this waiting time, PICO checks for one more second whether it is still in the door area. If PICO is still in the door area, nothing has changed and the door has not opened; the area then qualifies as a regular dead end and PICO turns around. If PICO is no longer in the door area, it concludes that the door has opened and continues on its path. Since this path was directed at the door, PICO will go through the door and continue the search for the exit of the maze.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation&diff=20568Embedded Motion Control 2015 Group 3/Localisation2015-06-26T12:44:43Z<p>S119349: </p>
<hr />
<div>= Localisation = <br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
This page goes into details about localisation.<br />
<br />
== Requirements of Localisation ==<br />
<br />
In order to be able to locate itself within its environment, the robot needs information. The required data to obtain global position are:<br />
# global x-position [m]<br />
# global y-position [m]<br />
# global a-angle [rad.]<br />
<br />
The error in the position data must be quantified and minimized, so that no mistakes accumulate in the location estimate in the long run. For example, if the robot's x- and y-coordinates drift due to an error, the robot will think it is at a different location, whereas it is actually standing still in exactly the same location and orientation.<br />
<br />
The sensor data required to obtain the above mentioned position data are the following:<br />
# odometry: global x [m] , global y [m] , global a [rad.]<br />
# LRF: all laser ranges [m]<br />
# velocity input to robot<br />
<br />
== Method of localisation ==<br />
The robot needs global coordinates. There are two sensors it can use to determine these coordinates; however, both sensors have their own drawbacks. <br />
*The odometry sensor provides global x-y coordinates and angle. There is not much variance in the data of this sensor, but there is a drift (bias) that accumulates over time. The odometry data can be viewed as feedforward information for the system.<br />
*The LRF sensor provides 1000 ranges [m] with distances to objects over a scope of 270 degrees around the robot. This sensor shows no bias, but does have variance. Furthermore, the raw LRF data does not provide the global coordinates that we want, so the data has to be converted into position changes in terms of the global coordinates. The LRF data can be seen as the feedback loop of the system.<br />
<br />
When the robot initially starts its program, the global coordinates are all zero, so the start position of the robot determines the direction of the x- and y-axes. The data from the odometry and the LRF are updated at each time instant. The odometry works very intuitively: it tells you how far you have moved based on wheel rotation. The LRF, however, works as follows: it measures the distances to the objects in the environment at time t0 and measures again at t1. The difference in distance, converted to the wanted coordinates, should be equal to the odometry data. Of course this will not happen exactly, due to the errors in the sensors, which is why a filter is used to fuse the data at each update step.<br />
<br />
<br />
A '''Kalman filter''' is used to filter the data obtained from odometry as well as from LRF, in order to maximize the accuracy.<br />
<br />
=== Kalman filter ===<br />
The Kalman filter uses an update cycle with two steps. In the first step the new position is estimated based on the previous position and the input, and an estimate of its error is made which is used in the second step. In the second step, measurement data is used to correct the estimated position. Since the definition of the directions of the x- and y-axes is arbitrary, they are aligned with the corridor in which PICO starts. The algorithm that is used is shown in the figure below:<br />
<br />
[[File:scheme_Kalman.gif|900px|center|thumb|Scheme Kalman]]<br />
The various variables used in the figure above, are explained here:<br />
<br />
<math>\hat{x}_k^- </math> is the predicted ahead state variable at discrete time instance <math>k</math>. This column vector consists of the global x-position, global y-position and the global angle. So logically, <math>\hat{x}_{k-1}^- </math> is the same vector at a previous time instance.<br />
<br />
<math>A</math> is an n by n matrix that relates the state at the previous time step <math> k-1 </math> to the state at the current step <math> k </math>, in the absence of either a driving function or process noise.<br />
<br />
<math>B</math> is an n by n matrix that relates the optional control input u to the state variable.<br />
<br />
<math>u_{k-1} </math> is the control input at the previous state of time. So this corresponds to a 3 by 1 column vector containing the velocities that were sent to the wheel base:<br />
# vx: translational velocity in x-direction [m/s]<br />
# vy: translational velocity in y-direction [m/s]<br />
# va: angular velocity around z axis [rad./s]<br />
<br />
<math> P_k^-</math> is a n by n matrix containing the error covariance predicted ahead at time instance <math>k</math>.<br />
<br />
<math> Q</math> is a n by n matrix containing covariance of the process noise.<br />
<br />
<math> K_k</math> is an n by m matrix that represents the Kalman gain.<br />
<br />
<br />
<math> H</math> is an m by n matrix that relates the state to the measurement <math>z_k</math>.<br />
<br />
<math> R</math> is an m by m matrix that contains the measurement noise covariance.<br />
<br />
<math> z_k</math> is the measured data in a column vector (to be compared to predictions).<br />
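Written out, the cycle in the scheme corresponds to the standard discrete-time Kalman filter equations, a time update followed by a measurement update:<br />
<math>\hat{x}_k^- = A \hat{x}_{k-1} + B u_{k-1}, \qquad P_k^- = A P_{k-1} A^T + Q</math><br />
<math>K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}</math><br />
<math>\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-), \qquad P_k = (I - K_k H) P_k^-</math><br />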
<br />
During the second step of the Kalman update both the LRF and the odometry are used. For both sensors the difference between the current and the last value is used to determine the position change since the last update. This value is then added to the previous position from the Kalman update. The odometry data can be used directly. For the LRF, however, the x, y and a values first have to be calculated from the raw LRF data. This is done by measuring the distance to the end of the corridor. Since PICO can see 270 degrees around itself, it can always measure the distance to one end of the corridor it is in, as well as the distance to one of the side walls of the corridor. <br />
<br />
[[File:LRF_KALMAN.png|200px|center|thumb|LRF Kalman]]<br />
<br />
The estimated angle is used to calculate which sensor should point towards the end of the corridor. An interval around the corresponding LRF beam is searched for a local minimum, which should belong to the beam that hits the end wall perpendicularly. This beam points directly at the end of the corridor and is used to calculate the LRF value for the angle of PICO. The difference between the previous and current distance to the end wall is the position change for either x or y used in the Kalman update. The other position change is calculated similarly, but using the distance to the side wall instead of the end of the corridor. <br />
Since it is possible to lose sight of a wall, for instance when driving on an intersection, a safeguard is put in place. If the position change based on the LRF is too big, it is assumed that the LRF data is unreliable for that update cycle and only the odometry data is used. This is done by switching between two R matrices, one of which sets the contribution of the laser data to zero. In the regular R matrix the contribution of the LRF data is weighed more heavily, under the assumption that the LRF is more reliable overall.<br />
<br />
== Implementation of method ==<br />
Here the implementation of the method will be briefly discussed.<br />
===Interface===<br />
Localisation is a separate cpp-file which can be seen as a block. It uses the Scan class to read sensor data, and the Drive class to retrieve information on the velocity. The variables that are used as arguments are copies and are not passed by reference, as they are only needed to calculate the position of the robot. The variables that are only used within the localisation class are of course given private status. After all necessary blocks are defined in the header file localisation.h, accompanied by the function prototypes, the file localisation.cpp can be properly used. The file consists of 6 functions in total:<br />
* Initializing initial position<br />
* Retrieving LRF data and converting it into useful X,Y,A data, stored to be used in another function<br />
* Retrieving ODO data and storing it to be used in another function<br />
* Composing the Z column-vector used in the Kalman algorithm<br />
* Composing the B matrix that is used in the Kalman algorithm<br />
* Function containing the actual Kalman algorithm, which uses the information retrieved in the above functions to give an accurate localisation<br />
The resulting X,Y,A coordinates are made available to the mapping block for further use. There this info can be used to pinpoint and save nodes, as well as to recognize whether the robot has visited these coordinates before, and much more. See the mapping section for further details on this block.<br />
<br />
=== Function 1: initialize position: SetX0()===<br />
In this function the position is determined by means of a for-loop over all of the laser ranges. The minimum range is determined by means of an if-statement; when found, it is stored along with its beam number. This direction is set to correspond to the global Y-direction, and the global angle A is calculated as well, using the beam number. The global X-axis is then set using the global angle. These values are all stored and X,Y,A are initialized.<br />
<br />
=== Function 2: LRF-retrieval and conversion: LrfLocalisation()===<br />
This function uses the range data to extrapolate the position of the robot. <br />
First the angular position A is evaluated. If this angle is larger than pi, or smaller than minus pi, it will be wrapped by subtracting or adding 2 pi respectively.<br />
This wrapped angle is used to determine the beam numbers of the current minimal ranges. This is simply done using the following relation:<br />
<br />
n_x = (X[2] + robot.angle_max) / robot.angle_increment;<br />
In this equation n_x corresponds to the beam number of the global X-axis and position. X[2] is the global angle A and the other parameters are self-explanatory. The range belonging to this beam number is stored. The same procedure is used to determine the Y-data.<br />
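With the grouping made explicit, the relation can be sketched as follows (a standalone illustration; the numeric values in the usage below are illustrative, not the actual robot constants):<br />

```cpp
#include <cassert>
#include <cmath>

// Beam index that looks along global angle A (X[2] in the text), assuming
// beam 0 looks along -angle_max and beams are spaced by angle_increment.
int beamForAngle(double angle, double angle_max, double angle_increment) {
    return static_cast<int>(std::round((angle + angle_max) / angle_increment));
}
```

For example, with an assumed angle_max of 1.0 rad and increment of 0.5 rad, angle 0 maps to the middle beam and angle -angle_max to beam 0.<br />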
<br />
In order to create robustness and to prevent problems in other function calls, some if-statements are included to switch axes if necessary. If the beam number n_x is smaller than 4 or bigger than 994, it is one of the outer ranges. If one of these if-statements is true, the beam number n_x is changed: we switch from the positive to the negative axis, or the other way around, by adding pi to the angle. This way, the axis cannot get out of sight of the robot. Simple boolean expressions are used to notify the system whether the positive or negative axis is being tracked. This is important because it determines whether distances should be counted as negative or positive. This axis-switching system is incorporated because no assumption is made regarding the initial position: the robot should be able to determine an axis to track regardless of its initial angle in the maze.<br />
<br />
Next, the values of the ranges between n_x-5 and n_x+5 are checked using a for-loop. If a minimum range is found, it is stored. However, if the minimum value lies on the outer bounds of the loop, or if the difference is too small (noise), then no minimum is found. This is done for both the x- and y-direction.<br />
<br />
Finally, using the laser distance in the x- and y-direction, the difference in range with respect to the previous function call is calculated. These differences represent the movements made between function calls, and therefore the change in position. At every function call, the differences in each direction are added to the total of the differences calculated in previous calls. These variables therefore represent the global X, Y and A position.<br />
<br />
deltaXLFR = min_x_value*pow(-1,neg_x_axis) - oldXLFR; // Calculate change in X by LRF<br />
deltaYLFR = min_y_value*pow(-1,neg_y_axis) - oldYLFR; // Calculate change in Y by LRF<br />
deltaALFR = CorrectedAngleA - oldALFR; // Calculate change in angle using LRF from previous program call<br />
oldXLFR = min_x_value*pow(-1,neg_x_axis); // Calculate the new oldXLFR<br />
oldYLFR = min_y_value*pow(-1,neg_y_axis); // Calculate the new oldYLFR<br />
oldALFR = CorrectedAngleA; // Calculate the new oldALFR<br />
<br />
=== Function 3: ODO-retrieval: ODOLocalisation()===<br />
This function creates an odometry object and reads the odometry data from the io-layer. It starts with an initial X=0, Y=0, A=A_init. It calculates the differences each time the function is called: if the x-position is 5 meters and the next function call reads 5.2 meters, then the difference equals the recent reading minus the old reading. The differences are stored, as well as the old position and the sum of the differences (the actual position). This info is used by the other functions in the file that need it.<br />
<br />
<br />
void Localisation::ODOLocalisation(){<br />
 emc::OdometryData odom; // Creating odometry data object to retrieve odo-info<br />
 deltaXODO = odom.x - oldXODO; // Calculating change in x-position using odo<br />
 deltaYODO = odom.y - oldYODO; // Calculating change in y-position using odo<br />
 deltaAODO = odom.a - oldAODO; // Calculating change in a-position using odo<br />
 oldXODO = odom.x; // Setting retrieved odo global x-position to oldXODO<br />
 oldYODO = odom.y; // Setting retrieved odo global y-position to oldYODO<br />
 oldAODO = odom.a; // Setting retrieved odo global a-position to oldAODO<br />
}<br />
<br />
=== Function 4: Making Z: MakeZ()===<br />
<br />
Z is a 6-entry column vector. Every time it is called, it will update the entries as follows: <br />
The first three entries (corresponding to X_lrf, Y_lrf and A_lrf respectively) equal their global position plus the difference in position calculated in the LRF-retrieval function.<br />
The latter three entries correspond in the same manner to their odometry counterparts. <br />
These second X_odo,Y_odo and A_odo are updated in the same manner, but using the difference in odometry instead of LRF. <br />
The odometry differences are calculated in function 3. <br />
Now the Z-column vector is ready to be used by the Kalman filter.<br />
<br />
void Localisation::MakeZ(){<br />
z[0] = X[0] + deltaXLFR; //Lrf set of x,y,a<br />
z[1] = X[1] + deltaYLFR;<br />
z[2] = X[2] + deltaALFR;<br />
 z[3] = X[0] + deltaXODO; //Odo set of x,y,a<br />
z[4] = X[1] + deltaYODO;<br />
z[5] = X[2] + deltaAODO;<br />
}<br />
<br />
=== Function 5: Making B: MakeB()===<br />
<br />
This function simply constructs the 3x3 matrix B that is used in the Kalman algorithm. The first two rows concern the X- and Y-position: the first row consists of cos(X[2]), sin(X[2]) and zero respectively.<br />
The mirror holds for the Y-position, so the second row consists of sin(X[2]), cos(X[2]) and zero respectively.<br />
The third row contains zero entries for the first two columns, and a value of 1 for the angular entry.<br />
<br />
<br />
<br />
void Localisation::MakeB(){<br />
B.at<double>(0,0) = cos (X[2]);<br />
B.at<double>(0,1) = sin (X[2]);<br />
B.at<double>(1,0) = sin (X[2]);<br />
B.at<double>(1,1) = cos (X[2]);<br />
B.at<double>(2,2) = 1;<br />
 B = B/30; // scale by the update rate (presumably 30 Hz) to convert velocity to displacement<br />
}<br />
<br />
=== Function 6: Kalman filter: UpdateKalman()===<br />
<br />
A short explanation on this function:<br />
First it is checked whether X0 is set. If this is true, LrfLocalisation (function 2) is called, as well as ODOLocalisation (function 3).<br />
Now we have two sets of X, Y and A positions, each with its own accuracy (odo drift, LRF variance).<br />
Functions 4 and 5 are called next, in order to construct Z and B.<br />
If no minimum ranges were found, the R matrix is switched from R1 to R2 (only odometry data is taken). If a minimum has been found, then R1 is used (both odometry and LRF info).<br />
Next the U vector is constructed, consisting of the velocity components in the X, Y and A directions. <br />
The ahead projection of the position (X) is done using this data. After that, the projected error covariance matrix P is calculated, the Kalman gain is computed, and finally the estimates are updated with the measurements, yielding a filtered position. <br />
<br />
If X0 was not set, the function SetX0() is called again, until X0 is set.<br />
The figure below shows the structure of the algorithm far more clearly.<br />
<br />
 void Localisation::UpdateKalman(ScanVariables robot){<br />
   if(X0set){ // Only run the filter once the initial position has been set<br />
     LrfLocalisation(robot); // Get x, y, a from the LRF<br />
     ODOLocalisation(); // Get x, y, a from the odometry<br />
     MakeZ(); // Put the x, y, a from LRF and odo in the measurement vector z<br />
     MakeB(); // Construct the input matrix B from the current angle<br />
     if (!min_y_found || !min_x_found){<br />
       R=R2; // No reliable LRF minimum: use odometry only<br />
     }<br />
     else {<br />
       R=R1; // LRF minima found: use both LRF and odometry<br />
     }<br />
     U = Vec3d(drive.vx,drive.vy,drive.va); // Creating input vector U, containing vx, vy, va<br />
     X = Mat((A * Mat(X)) + B * Mat(U)); // Project the state X (global position) ahead<br />
     P = A * P * A_transpose + Q; // Project the error covariance matrix ahead<br />
     if(X[2] > M_PI)<br />
     {<br />
       X[2] = X[2] - 2*M_PI; // Wrap the angle into [-pi, pi]<br />
     }<br />
     else if (X[2] < -M_PI)<br />
     {<br />
       X[2] = X[2] + 2*M_PI;<br />
     }<br />
     Mat matrixToInverse = H * P * H_transpose + R; // Innovation covariance, to be inverted for the Kalman gain<br />
     inverseMatrix = matrixToInverse.inv(); // Actually inverting this matrix<br />
     Mat K = P * H_transpose * ( inverseMatrix); // Calculating the Kalman gain<br />
     X = Mat(Mat(X) + K * (Mat(z) - H * Mat(X))) ; // Updating the estimate with measurement z<br />
     P = (Mat::eye(3,3,CV_64F) - K * H) * P ; // Updating the error covariance<br />
   }<br />
   else{ // X0set is false<br />
     SetX0(robot); // Determine the initial position first<br />
   }<br />
 }<br />
<br />
== Technicalities and solutions==<br />
<br />
<br />
=== Problems with Localisation block===<br />
<br />
The ambitious idea to use no assumptions, and to make the robot able to localise itself properly in any arbitrary environment, proved to take more time than predicted.<br />
The problem that occurred is that the angle A would sometimes get strange values, and through this error the whole program produces gibberish, as the angle is widely used throughout the functions. Various strategies were used to make the program work properly; however, every time it seemed to work in a certain maze, it would cause problems in a different maze. <br />
As the localisation is of vital importance for the mapping to work, we had to think of a back-up plan in case this would not work out. To this end, we devised several simplified versions of the localisation. These versions use more assumptions, as well as an accuracy that is lower than before. However, it seemed better to have somewhat less accuracy, but knowing that the procedure would work in any maze, than a very accurate but unstable procedure.<br />
<br />
=== Alternative 1: Simpler localisation, using axis-alignment and phase snaps ===<br />
For this localisation method, both laser data and odometry data are used, without using a Kalman filter.<br />
====Localisation method====<br />
[[File:Dirty_loca.jpg|thumb|left|Schematic view of the localisation method]] As shown in the diagram, at the initial position 1, the robot stores the distance from itself to the furthest wall in front of it, which is done by finding the element of the laser data with the largest range and the corresponding angle of this element. Then, at position 2, where the robot starts to turn, it runs a startpoint() function, which again calculates its distance to the wall. Comparing the distance change between positions 1 and 2, the travelled distance A can be obtained, which is used to update the global coordinates. In addition, the odometry data is saved at this point.<br><br />
When it stops turning at position 3, the function endpoint() is called. Here it is not possible to use laser data to update the global coordinates; therefore, the change in odometry data between these two positions is used for the coordinate update. After the turn has finished, at the end position 3, the robot again stores its distance to the furthest wall, shown as distance B, which is used to obtain the next travelled distance. <br />
<br />
====Turn detection====<br />
Now, the difficulty lies in recognising points 2 and 3, or in other words the times at which to call the startpoint() and endpoint() functions. For this, the change of the angular data in the odometry is used, since the angular value will change because of the applied angular velocity. However, only comparing two subsequent elements is not robust, as we know from experiments that the odometry data can sometimes have very high peaks. Therefore, the angular data is stored in an array, and each iteration the current odometry angle is added to a new array, which is then compared to the old array to obtain the angular change at each of the iterations saved in the array. By then setting conditions such as the threshold value above which we consider the robot to be turning, the number of elements required to be above this value, and the number of elements in the array, the startpoint and endpoint of a turn can be detected.<br />
<br />
In another version of this code, an exponential weighted moving average was used, which requires only one variable instead of an array. The average can very easily be updated:<br />
<br />
<math>\dot{A}_{new}=\frac{(1-w)\dot{A}_{old}+w\dot{A}_{current}}{\Delta t}</math> with <math>w<1</math> a weighting variable. <br />
<br />
A high <math>w</math> discards old values faster, whereas a low <math>w</math> gives a smoother average. If the average exceeds a certain threshold, it is detected as a turn.<br />
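A sketch of this one-variable average (the weight and threshold in the usage are made-up tuning values, and the Δt scaling from the formula is assumed to be folded into the angular-rate samples):<br />

```cpp
#include <cassert>
#include <cmath>

// One EWMA step: blend the previous average with the newest angular-rate
// sample. A high w discards old values faster, as described in the text.
double ewma(double old_avg, double sample, double w) {
    return (1.0 - w) * old_avg + w * sample;
}

// A turn is flagged while the smoothed angular rate exceeds the threshold.
bool isTurning(double avg_rate, double threshold) {
    return std::fabs(avg_rate) > threshold;
}
```

Each iteration the caller feeds the latest odometry angle change into ewma() and keeps the result as the single state variable, replacing the array of the previous approach.<br />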
<br />
====Tuning====<br />
In order to get a very robust and simple code, certain assumptions were made. For instance, the turns which were detected by the odometry data were translated to turns of a multiple of pi. Furthermore, the detected values of both the odometry and laser data were not used if they exceeded 100 [m]. <br><br />
The code itself appeared to work quite well: the angle was correct around 19 out of 20 times, and even then it seemed to correct itself after some time. The x and y values, however, still needed tuning. <br><br />
<br />
====Final implementation====<br />
Two versions were implemented, in the spirit of 'divide and conquer'. One version was not completely integrated with the rest of the code in time. However, during the testing (in the maze challenge), some other flaws became apparent. First, the continuous requests for odometry data impeded other processes such as driving and reading the laser data. Furthermore, the maze was so packed with corners that there was not much opportunity to drive straight and use the laser data for the distance measurements, which means that the odometry was used so much that the position was no longer accurate. This also meant that some strange decisions were made, which interfered even further with the driving.<br />
<br />
=== Alternative 2: Simplest localisation, using no Kalman filter ===<br />
<br />
In order to finish a simpler version of localisation in time, the use of a Kalman filter was dropped. In total, three big changes were made to the algorithm:<br />
* The Kalman filter was dropped in favour of measuring distances only when arriving at intersections.<br />
* The precise angle calculations were dropped in favour of an angle with four possible values for the direction (-pi/pi, -pi/2, 0 and pi/2).<br />
* The LRF calculations were simplified.<br />
<br />
The Kalman update function contained several bugs which we were unable to find in time. Hence the Kalman filter was dropped, and measurements are only done when approaching an intersection. This works because between two possible turns Pico can only drive along one axis. <br />
Because the precise angle is only needed for the original LRF calculation, it can be dropped in this simplified version. Instead, the angle only keeps track of the direction of the robot, in order to know along which axis Pico is travelling. This angle is updated in the decision part of the code by adding or subtracting pi/2 for left and right turns respectively.<br />
Since the calculations with the LRF proved most complicated, these were simplified. Instead of searching in the presumed directions of the x- and y-axes for local minima, only the shortest distance to the left wall is calculated. This is the distance to either the x- or y-axis, depending on the orientation of the robot. The other axis is measured by simply picking the sensor corresponding to the beam number with a rotation of pi/2. <br />
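The simplified angle bookkeeping and beam selection can be sketched as follows. This is a hypothetical helper, assuming a 270-degree LRF with evenly spaced beams and beam 0 at -135 degrees; the real beam indexing may differ:

```cpp
#include <cmath>

// The coarse direction is one of four quarter-turn values: 0, +-pi/2, pi.
enum Turn { LEFT, RIGHT, STRAIGHT };

// Update the coarse heading after a decision: +pi/2 for left, -pi/2 for right.
double update_heading(double heading, Turn turn) {
    if (turn == LEFT)  heading += M_PI / 2.0;
    if (turn == RIGHT) heading -= M_PI / 2.0;
    // Wrap to (-pi, pi] so that -pi/pi stays a single case.
    while (heading > M_PI)   heading -= 2.0 * M_PI;
    while (heading <= -M_PI) heading += 2.0 * M_PI;
    return heading;
}

// Pick the LRF beam that points a quarter turn (pi/2) to the robot's left,
// assuming n_beams beams spread evenly over a 270-degree field of view.
int beam_at_left(int n_beams) {
    double fov = 1.5 * M_PI;                 // 270 degrees in radians
    double angle_per_beam = fov / (n_beams - 1);
    return (int)std::lround((M_PI / 2.0 + 0.75 * M_PI) / angle_per_beam);
}
```

Because the heading is only ever changed in quarter-turn steps, no angle estimation is needed at all, which is exactly what made this version simpler.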
<br />
This method was abandoned because of time constraints, when we realised that our way of tracking angle changes did not support non-intersection corners, since no decision is made while taking such a corner.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=20302Embedded Motion Control 2015 Group 32015-06-26T09:54:43Z<p>S119349: /* Software architecture */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3, part of the [[Embedded_Motion_Control_2015|Embedded Motion Control 2015 course]].<br />
<br />
= Checklist Wiki contents =<br />
This table shows some checkpoints that still have to be done. The first part is derived from the email from Sjoerd; at the end, some general points are added. Feel free to update this table. You can also add extra checkpoints.<br />
<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!Check<br />
!Remarks<br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|X<br />
|Should be checked for (minor) errors<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|Still has to be implemented more clearly, I think<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|Don't know, could be more specific<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|Don't know, could be more specific<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|X<br />
|Finished I think<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|X<br />
|Adding an animation would be nice<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|X<br />
|Add code on scan subpage if necessary<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|Finished?<br />
|- <br />
|7<br />
|Cohesion between main-page and subpages, especially in section 'Software implementation'<br />
|<br />
|<br />
|-<br />
|8<br />
|Check if we did not forget to describe an essential step<br />
|<br />
|<br />
|-<br />
|9<br />
|Check every part for (small) mistakes/errors<br />
|<br />
|<br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply it in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight into the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). To make this operational, and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. In this case, PICO has two types of useful inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* To do so, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
<br />
[[File:behaviour_diagram.png|250px|thumb|right|Blockdiagram for connection between the contexts]] The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level things the robot should be able to do. These are:<br />
*Determine situation<br />
*Decision making<br />
*Skill selection<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
*Drive<br />
*Rotate<br />
*Scan environment<br />
*Handle intersections<br />
*Handle dead ends<br />
*Discover doors<br />
*Mapping environment<br />
*Make decisions based on the map<br />
*Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|250px|thumb|right|Overall structure]]To solve the problem, it is divided into different blocks, each with its own function. We have chosen to make five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure on the right shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. A short clarification of the figure is given here; more detailed information on each block is discussed in the next sections. <br />
<br />
Let's start with the Scan block:<br />
* Scan receives information about the environment, using the laser range finder data.<br />
* Based on this data, Scan consults its potential field algorithm to compose a vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together, the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at what interesting point the robot is, for instance a junction or a dead end.<br />
* Decision should then know whether the robot has been there before. Based on that, it can send a new action to Scan/Drive. <br />
* The new vector is now based on the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.<br />
<br />
=== Calibration ===<br />
<p>[[File:Originaldata.png|250px|thumb|right|Figure 1 - Calibration: Difference between odometry and LRF]] In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors. The robot was programmed to drive back and forth in front of a wall. At every time instance, it would collect odometry data as well as laser data. The laser data point straight in front of the robot was compared to the odometry data, i.e. the driven distance was compared to the measured distance to the wall in front of the robot. 'Figure 1 - Calibration' shows these results. The starting distance from the wall is subtracted from the laser data signal; then, the sign is flipped, so that the laser data would match the odometry exactly if the sensors provided perfect data. Two things are notable from this figure:<br />
*The laser data and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
[[File:StaticLRF.png|250px|thumb|right|alt=Static LRF|Figure 2 - Calibration: Static LRF]]<br />
<br />
The noisy signal that was returned by the laser is presented in 'Figure 2 - Calibration'. Here, a part of the laser data is picked from a robot that was not moving.<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm.<br />
* The laser produces a noisy signal. Do not trust one measurement but take the average over time instead.<br />
* The odometry produces no notable noise at all, but it has a significant drift as the driven distance increases. Usage is recommended only for smaller distances (<1 m)</p><br />
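The noise statistics quoted above can be reproduced from a batch of static range samples. A minimal sketch with hypothetical helper names (the sample-based standard deviation is used):

```cpp
#include <cmath>
#include <numeric>
#include <vector>

// Mean of a set of static range samples [m].
double sample_mean(const std::vector<double>& r) {
    return std::accumulate(r.begin(), r.end(), 0.0) / r.size();
}

// Sample standard deviation (divide by n-1) of static range samples [m].
double sample_stddev(const std::vector<double>& r) {
    double m = sample_mean(r);
    double ss = 0.0;
    for (double x : r) ss += (x - m) * (x - m);
    return std::sqrt(ss / (r.size() - 1));
}
```

Averaging over time, as recommended above, reduces the effective standard deviation of the laser measurement by roughly the square root of the number of samples.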
<br><br><br><br><br><br><br><br><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
= Software implementation =<br />
In this section, the implementation of the software will be discussed based on the block division we made.<br />
<br />
A brief description of each block can be found here. In addition, more detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
[[File:Drive.jpg|250px|thumb|right|Composition pattern of Drive]] Basically, the [[Embedded_Motion_Control_2015_Group_3/Drive|Drive block]] is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using a potential field; how the potential field works in detail is shown in [[Embedded_Motion_Control_2015_Group_3/Scan|Scan]]. A potential field is an easy way to drive through corridors and make turns. It is important to note that information from the Decision maker can influence the tasks Drive has to do. <br />
<br />
Two other methods were also considered: the [[Embedded_Motion_Control_2015_Group_3/Drive#Simple_method|Simple method]] and [[Embedded_Motion_Control_2015_Group_3/Drive#Path_planning_for_turning|Path planning]]. We spent a lot of time on the path planning method in particular. However, after testing, the potential field proved to be the most robust and convenient method.<br />
<br><br><br><br><br><br />
<br />
<br />
<br />
<br />
<br />
=== Scan block ===<br />
[[File:Scan_cp_new.jpg|250px|thumb|right|Composition pattern of Scan]][[Embedded_Motion_Control_2015_Group_3/Scan|The block Scan]] processes the laser data of the Laser Range Finder. This data is used to detect corridors, doors, and different kinds of junctions. The data retrieved by 'scan' is used by all three other blocks. <br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br><br><br><br><br />
<br />
PICO always moves to the place with the most space, using its potential field. However, at junctions and intersections the current potential field is incapable of leading PICO into the desired direction. Virtual walls are constructed to shield potential pathways, so that PICO will move in the desired direction determined by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. Also, the scan block is capable of detecting doors, which is a necessary part of solving the maze. More detailed information about these properties:<br />
<br />
* Potential field<br />
* Detecting junctions/intersections<br />
* Virtual walls<br />
* Collision avoidance<br />
* Detecting doors<br />
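The idea behind the potential field can be sketched as follows: every beam contributes a small repulsive push away from nearby obstacles, so the summed vector points toward the most open direction. The names, gain and fall-off here are hypothetical and differ from the actual implementation:

```cpp
#include <cmath>
#include <vector>

// One laser beam: bearing [rad] relative to the robot and measured range [m].
struct Beam { double angle; double range; };

// Sum a repulsive contribution from every beam. Close walls push harder
// (1/r^2 fall-off, a hypothetical choice), so the resulting (vx, vy) vector
// points away from obstacles and toward free space.
void potential_field(const std::vector<Beam>& beams, double& vx, double& vy) {
    vx = 0.0;
    vy = 0.0;
    for (const Beam& b : beams) {
        double push = 1.0 / (b.range * b.range);  // stronger when closer
        vx -= push * std::cos(b.angle);           // push away from obstacle
        vy -= push * std::sin(b.angle);
    }
}
```

Virtual walls fit naturally into this scheme: inserting fake short-range beams in a blocked direction adds repulsion there, steering PICO into the direction chosen by the decision maker.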
<br />
=== Decision block ===<br />
[[File:Composition_Pattern_Decision.png|250px|thumb|right|Composition pattern of Decision]]The [[Embedded_Motion_Control_2015_Group_3/Decision|Decision block]] contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
<ins>Input:</ins><br />
* Mapping model<br />
* Scan data<br />
<br />
<ins>Output:</ins><br />
* Specific drive action command<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, the direction with the fewest marks is chosen. If two directions have been visited equally often, one of them is chosen at random. Eventually, the exit of the maze will be reached.<br />
<br />
=== Mapping block ===<br />
[[File:Emc03 wayfindingCP1.png|250px|thumb|right|Mapping & solve algorithm]] [[Embedded_Motion_Control_2015_Group_3/Mapping|The mapping block]] will store the corridors and junctions of the maze. This way, the decision maker can make informed decisions at intersections, to ensure that the maze will be solved in a strategic way.<br />
<br />
To do this, the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used, which is an implementation of Depth First Search.<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end, or any place in the maze where the robot can go in more than one direction (i.e., an intersection). An edge is the connection between one node and another, and as such is also called a corridor. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is the direction that the Mapping block considers the best option.<br />
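The decision step described above can be sketched with a mark count per edge. This is a hypothetical sketch, not the project's code; ties are resolved by order here, whereas the real implementation chose randomly among equally marked directions:

```cpp
#include <map>
#include <utility>
#include <vector>

// Tremaux bookkeeping: count how often each corridor (edge between two node
// ids) has been traversed, and prefer the least-marked option.
struct Tremaux {
    std::map<std::pair<int, int>, int> marks;  // traversal count per edge

    // Edges are undirected: store with the smaller node id first.
    static std::pair<int, int> edge(int a, int b) {
        return a < b ? std::make_pair(a, b) : std::make_pair(b, a);
    }

    // Add one mark to the corridor just driven ("draw a line on the floor").
    void traverse(int from, int to) { marks[edge(from, to)]++; }

    // At node 'here', pick the neighbour whose connecting edge has the
    // fewest marks.
    int choose(int here, const std::vector<int>& neighbours) {
        int best = neighbours[0];
        int best_marks = 1 << 30;
        for (int n : neighbours) {
            int m = marks.count(edge(here, n)) ? marks[edge(here, n)] : 0;
            if (m < best_marks) { best_marks = m; best = n; }
        }
        return best;
    }
};
```

Because every edge is traversed at most twice before the algorithm backtracks, the robot is guaranteed to reach the exit of any maze in finite time, provided the node ids (and hence localisation) are correct.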
<br />
In order to detect loops, the position of the robot must be known as well as the position of each node, to see when the robot has returned to the same location. This is decoupled from the Mapping block and done in the [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation block]]. Since the localization did not work in the end, a Random Walk implementation was also made in the Mapping block.<br />
<br />
<br />
<br />
<br />
<br />
=== Localisation block ===<br />
The purpose of the localisation is to prevent the robot from driving in a loop for an infinite time, by knowing where it is at any given moment. Knowing where it is, the robot can decide what to do next based on this information. As a result, the robot will be able to exit the maze within finite time, or it will report that there is no exit if it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation page]], by separating and discussing the important factors.<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge. During this challenge, we had to let the robot drive through a corridor and then take the first exit (whether left or right). This job can be tackled with two different approaches:<br />
# Make a script based only on the corridor challenge.<br />
# Make a script for the corridor challenge, but with clear references to the final maze challenge.<br />
We chose the second approach. This implied some extra work to think about a properly structured code, because only then could the same script be used for the final challenge. After the corridor competition, our choice was debatable, because we failed the corridor challenge while other groups succeeded. But most of those groups had selected approach 1, and we already had a decent base for the a-maze-ing challenge, which proved its worth later.<br />
<br />
For the a-maze-ing challenge we decided to use two versions of our software package. In the first run (see the section Videos further down the page), we implemented Trémaux's algorithm together with a localiser, which would together map the maze and try to solve it. Our second run was conducted with Trémaux's algorithm and the localisation algorithm turned off: each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
=== Run 1 ===<br />
The first run is taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way cross-section and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was more narrow than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused by the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because the 'door' (of course) did not open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight but also to evade the walls to the left and to the right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits for the maximum of 5 seconds that it can take to open a door. It then recognizes that this too is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Trémaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the other nodes to be placed. The robot decides to follow the same route as the first time, fails to drive to the corridor with the door in it, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction that the robot encountered made PICO go into its collision-avoiding mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
=== Run 2 ===<br />
For the second run, we ran a version of our software without Trémaux's algorithm implemented and with the global localiser absent. These features were developed later in the project and were not 100% finished. For this run, a random decision was passed to the decision maker every time it asked for a new direction to head to.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same. This is because the robot is driven by separate blocks of software: the blocks that are active while following a corridor were exactly the same for both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. A random decision is then made to go left, followed by a right turn into the corridor with the door. The robot recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after ringing the bell, the robot is programmed to wait for five seconds until it is allowed to move again. During these five seconds, it uses the LRF to check whether the door has moved out of its way. After the passage was all clear, the robot started exploring the new area and drives into the open space. Note that, between 0:30 and 0:36, the robot makes a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of this open space. Between 0:36 and 0:46 it drives in 'open space mode'. This means that the robot will drive to the nearest wall and start driving alongside it; it should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47, the normal 'corridor mode' starts working again: the potential field method directs the robot towards the middle of the corridor, which explains the sharp turn it makes at 0:47. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the random decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. At 1:10, the well-earned applause for PICO starts, as it finishes the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [[Embedded_Motion_Control_2015_Group_3/Experiments|Here]] you can find short information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [[Embedded_Motion_Control_2015_Group_3/Drive|Drive]] <br />
* [[Embedded_Motion_Control_2015_Group_3/Scan|Scan]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Decision|Decision]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Mapping|Mapping]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Experiments|Experiments]]</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation&diff=19991Embedded Motion Control 2015 Group 3/Localisation2015-06-23T17:15:40Z<p>S119349: /* Turn detection */</p>
<hr />
<div>= Localisation = <br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
This page goes into details about localisation.<br />
<br />
== Requirements of Localisation ==<br />
<br />
In order to be able to locate itself within its environment, the robot needs information. The data required to obtain the global position are:<br />
# global x-position [m]<br />
# global y-position [m]<br />
# global angle a [rad]<br />
<br />
The error in the position data must be quantified and minimized, in order not to make mistakes in the location in the long run. For example, if the robot's x- and y-coordinates drift due to an error, the robot will think it is at a different location, whereas it actually is standing still in exactly the same location and position.<br />
<br />
The sensor data required to obtain the above-mentioned position data are the following:<br />
# odometry: global x [m], global y [m], global a [rad]<br />
# LRF: all laser ranges [m]<br />
# velocity input to the robot<br />
<br />
== Method of localisation ==<br />
The robot needs global coordinates. There are two sensors which it can use to determine these coordinates; however, both sensors have their own drawbacks. <br />
*The odometry sensor provides global x-y coordinates and the angle. There is not much variance in the data of this sensor, but there is a drift (bias) that accumulates over time. The odometry data can be viewed as the feedforward information of the system.<br />
*The LRF sensor provides 1000 ranges [m] with distances to objects over a scope of 270 degrees around the robot. This sensor shows no bias, but it does have variance. Furthermore, the raw LRF data does not provide the global coordinates that we want, so this data has to be converted into position changes in terms of the global coordinates. The LRF data can be seen as the feedback loop of the system.<br />
<br />
When the robot initially starts its program, the global coordinates are all zero, so the start position of the robot determines the direction of the x- and y-axes. The data from the odometry and the LRF are updated at each time instant. The odometry works very intuitively: it tells you how far you have moved based on wheel rotation. The LRF, however, works as follows: it measures the distances to the objects in the environment at time t0, and measures again at t1. The difference in distance, converted to the wanted coordinates, should be equal to the odometry data. Of course this will not hold exactly, due to the errors in the sensors, which is why a filter is used to fuse the data between each update step.<br />
<br />
<br />
A '''Kalman filter''' is used to filter the data obtained from odometry as well as from LRF, in order to maximize the accuracy.<br />
<br />
=== Kalman filter ===<br />
The Kalman filter uses an update cycle with two steps. In the first step, the new position is estimated based on the previous position and the input; an estimate of its error is then made, which is used in the second step. In the second step, data from measurements is used to correct the estimated position. Since the definition of the directions of the x- and y-axes is arbitrary, they are aligned to the corridor in which Pico starts. The algorithm that is used is shown in the figure below:<br />
<br />
[[File:scheme_Kalman.gif|900px|center|thumb|Scheme Kalman]]<br />
The various variables used in the figure above, are explained here:<br />
<br />
<math>\hat{x}_k^- </math> is the predicted ahead state variable at discrete time instance <math>k</math>. This column vector consists of the global x-position, global y-position and the global angle. So logically, <math>\hat{x}_{k-1}^- </math> is the same vector at a previous time instance.<br />
<br />
<math>A</math> is an n by n matrix that relates the state at the previous time step <math> k-1 </math> to the state at the current step <math> k </math>, in the absence of either a driving function or process noise.<br />
<br />
<math>B</math> is an n by n matrix that relates the optional control input u to the state variable.<br />
<br />
<math>u_{k-1} </math> is the control input at the previous state of time. So this corresponds to a 3 by 1 column vector containing the velocities that were sent to the wheel base:<br />
# vx: translational velocity in x-direction [m/s]<br />
# vy: translational velocity in y-direction [m/s]<br />
# va: angular velocity around z axis [rad./s]<br />
<br />
<math> P_k^-</math> is a n by n matrix containing the error covariance predicted ahead at time instance <math>k</math>.<br />
<br />
<math> Q</math> is a n by n matrix containing covariance of the process noise.<br />
<br />
<math> K_k</math> is an n by n matrix that represents the Kalman gain.<br />
<br />
<br />
<math> H</math> is an n by m matrix that relates the state to the measurement <math>z_k</math>.<br />
<br />
<math> R</math> is an n by n matrix that contains the measurement noise covariance.<br />
<br />
<math> z_k</math> is the measured data in a column vector (to be compared to predictions).<br />
<br />
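With <math>A=I</math>, <math>B=\Delta t\,I</math> and diagonal covariance matrices, each coordinate (x, y, a) can be filtered independently. The predict/correct cycle of the scheme above then reduces to a scalar filter per axis; the following is a simplified sketch with hand-picked noise values, not the project's actual code:

```cpp
#include <cmath>

// Scalar Kalman filter for one coordinate (x, y, or a), following the
// predict/correct cycle in the scheme with A = 1 and B = dt.
struct ScalarKalman {
    double x = 0.0;  // state estimate
    double P = 1.0;  // error covariance
    double Q;        // process noise covariance
    double R;        // measurement noise covariance

    ScalarKalman(double q, double r) : Q(q), R(r) {}

    // Predict ahead using the commanded velocity u over time step dt.
    void predict(double u, double dt) {
        x = x + dt * u;  // x_k^- = A x_{k-1} + B u_{k-1}
        P = P + Q;       // P_k^- = A P_{k-1} A^T + Q
    }

    // Correct with a measurement z (e.g. the position from the LRF).
    void update(double z) {
        double K = P / (P + R);  // Kalman gain (H = 1)
        x = x + K * (z - x);     // blend prediction and measurement
        P = (1.0 - K) * P;       // updated error covariance
    }
};
```

A small R relative to P makes the gain K approach 1, so the measurement dominates; this is the scalar analogue of weighing the LRF data more heavily in the regular R matrix.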
<br />
<br />
<br />
<br />
<br />
<br />
During the second step of the Kalman update, both the LRF and the odometry are used. For both sensors, the difference between the current and last value is used to determine the position change since the last update. This value is then added to the previous position from the Kalman update. The odometry data can be used directly. For the LRF, however, the x, y and a values first have to be calculated from the raw LRF data. This is done by measuring the distance to the end of the corridors. Since Pico can see 270 degrees around itself, it can always measure the distance to one end of the corridor it is in, as well as the distance to one of the side walls of the corridor. <br />
<br />
[[File:LRF_KALMAN.png|200px|center|thumb|LRF Kalman]]<br />
<br />
The estimated angle is used to calculate which sensor should point towards the end of the corridor. An interval around the corresponding LRF beam is searched for a local minimum, which should belong to the beam that hits the end wall perpendicularly. This beam points directly at the end of the corridor and is then used to calculate the LRF value for the angle of Pico. The difference between the previous and current distance to the end wall is the position change for either x or y used in the Kalman update. The other position change is calculated similarly, but instead of the end of the corridor, the distance to the side wall is used.<br />
Since it is possible to lose sight of a wall, for instance when driving across an intersection, a safeguard is put in place: if the position change based on the LRF is too big, it is assumed that the LRF data is unreliable for that update cycle, and only the odometry data is used. This is done by switching between two R matrices, one of which sets the contribution of the laser data to zero. In the regular R matrix, the contribution of the LRF data is weighed more heavily, under the assumption that the LRF is more reliable overall.<br />
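The local-minimum search around the expected beam can be sketched as follows (hypothetical function name and window size; the real bounds handling may differ):

```cpp
#include <vector>

// Find the beam whose range is a local minimum inside a window around the
// expected beam index: that beam hits the end wall perpendicularly, so its
// range is the shortest distance to the wall.
int perpendicular_beam(const std::vector<double>& ranges,
                       int expected, int half_window) {
    int lo = expected - half_window;
    if (lo < 0) lo = 0;
    int hi = expected + half_window;
    if (hi > (int)ranges.size() - 1) hi = (int)ranges.size() - 1;
    int best = lo;
    for (int i = lo; i <= hi; ++i)
        if (ranges[i] < ranges[best]) best = i;
    return best;  // shortest range in the window = perpendicular hit
}
```

The offset between the returned index and the expected index is a direct measure of the angle error, which is how the LRF value for Pico's angle can be obtained.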
<br />
== Implementation of method ==<br />
Here the implementation of the method will be briefly discussed.<br />
===Interface===<br />
Localisation is a separate cpp-file which can be seen as a block. It uses the Scan class to read sensor data, and the Drive class to retrieve information on the velocity. The variables that are used as arguments are copies and are not passed by reference, as they are only needed to calculate the position of the robot. The variables that are only used within the localisation class are of course set to private. After all necessary blocks are defined in the header file localisation.h, accompanied by the function prototypes, the file localisation.cpp can be properly used. The file consists of 6 functions in total:<br />
* Initializing the initial position<br />
* Retrieving LRF data and converting it into useful X,Y,A data, stored to be used in another function<br />
* Retrieving ODO data and storing it to be used in another function<br />
* Composing the Z column-vector used in the Kalman algorithm<br />
* Composing the B matrix that is used in the Kalman algorithm<br />
* Function containing the actual Kalman algorithm, which uses the information retrieved in the above functions to give an accurate localisation<br />
The resulting X,Y,A coordinates are made available to the mapping block for further use. There this info can be used to pinpoint and save nodes, to recognise whether the robot has visited these coordinates before, and much more. See the mapping section for further details on this block.<br />
<br />
=== Function 1: initialize position: SetX0()===<br />
In this function the position is determined by means of a for-loop over all of the laser ranges. The minimum range is found by means of an if-statement and stored along with its beam number. This direction is set to correspond to the global Y-direction, and the global angle A is calculated as well, using the beam number. The global X-axis is then set using the global angle. These values are all stored and X,Y,A are initialized.<br />
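A minimal sketch of this initialisation, assuming a 270-degree LRF with roughly 1000 beams (the constants and names are illustrative, not taken from the project code):

```cpp
#include <cstddef>
#include <cmath>

// Illustrative sketch of SetX0(): scan all ranges for the minimum, record its
// beam number, and derive the initial global angle from that beam.
// Sensor constants are assumed values for a 270-degree, 1000-beam LRF.
const double kAngleMin = -2.356194;             // [rad], assumed
const double kAngleIncrement = 4.712389 / 999;  // [rad/beam], assumed

// Returns the index of the smallest range; writes the initial angle A
// (direction of the nearest wall, taken as the global Y-direction).
std::size_t setX0(const double* ranges, std::size_t n, double* a_init) {
    std::size_t i_min = 0;
    for (std::size_t i = 1; i < n; ++i)             // find the minimum range
        if (ranges[i] < ranges[i_min]) i_min = i;
    *a_init = kAngleMin + i_min * kAngleIncrement;  // beam number -> angle
    return i_min;
}
```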
<br />
=== Function 2: LRF-retrieval and conversion: LrfLocalisation()===<br />
This function uses the range data to extrapolate the position of the robot. <br />
First the angular position A is evaluated. If this angle is larger than pi, or smaller than minus pi, it will be wrapped by subtracting or adding 2 pi respectively.<br />
This wrapped angle is used to determine the beam numbers of the current minimal ranges. This is simply done using the following relation:<br />
<br />
 n_x = (X[2] + robot.angle_max) / robot.angle_increment;<br />
In this equation n_x corresponds to the beam number of the global X-axis and position. X[2] is the global angle A and the other parameters are self-explanatory. Note that the parentheses matter: the angle offset must be added before dividing by the increment. The range belonging to this beam number is stored. The same procedure is used to determine the Y-data.<br />
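The angle wrap and beam-number lookup described above can be sketched as follows; the sensor constants are assumed values for a 270-degree, 1000-beam scanner:

```cpp
#include <cmath>

// Sketch of the angle wrap and beam-number lookup; robot parameters are
// assumed (270-degree scan, 1000 beams), not read from the project code.
const double kPi = 3.14159265358979323846;
const double kAngleMax = 2.356194;                   // [rad], assumed
const double kAngleIncrement = 2 * kAngleMax / 999;  // [rad/beam], assumed

double wrapToPi(double a) {          // map the angle into (-pi, pi]
    while (a > kPi)   a -= 2 * kPi;
    while (a <= -kPi) a += 2 * kPi;
    return a;
}

// Beam number pointing along the global angle: offset first, then divide.
int beamNumber(double global_angle) {
    return static_cast<int>((wrapToPi(global_angle) + kAngleMax) / kAngleIncrement);
}
```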
<br />
In order to create robustness and to prevent problems in other function calls, some if-statements are included to switch axes if necessary. If the beam number n_x is smaller than 4 or bigger than 994, it is one of the outer ranges. If one of these if-statements is true, the beam number n_x will be changed: we switch from the positive to the negative axis, or the other way around, by adding pi to the angle. This way, the axis cannot get out of sight of the robot. Simple boolean expressions are used to notify the system whether the positive or the negative axis is being tracked. This is important because it determines whether distances should be counted as negative or positive. This system of axis-switches is incorporated because no assumption is made regarding the initial position: the robot should be able to determine an axis to track, regardless of its initial angle in the maze.<br />
<br />
Next the values of the ranges between n_x-5 and n_x+5 are checked using a for-loop. If a minimum range is found, it is stored. However, if the minimum value lies on the outer bounds of the loop, or if the difference is too small (noise), then no minimum is found. This is done for both the x- and y-direction.<br />
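A sketch of this windowed search, with the edge and noise rejections described above (the window size follows the text; the noise threshold is an assumed value):

```cpp
#include <cstddef>

// Sketch of the windowed minimum search around beam n_x: a minimum is only
// accepted if it does not lie on the window edge and differs enough from the
// window bounds (otherwise it is treated as noise). Threshold is assumed.
const int kWindow = 5;
const double kNoiseThreshold = 0.01;  // [m], assumed

// Returns true and writes the minimum range if a valid local minimum exists.
bool windowMin(const double* ranges, int n_beams, int n_x, double* r_min) {
    int lo = n_x - kWindow, hi = n_x + kWindow;
    if (lo < 0 || hi >= n_beams) return false;
    int i_min = lo;
    for (int i = lo + 1; i <= hi; ++i)
        if (ranges[i] < ranges[i_min]) i_min = i;
    if (i_min == lo || i_min == hi) return false;   // lies on the window edge
    if (ranges[lo] - ranges[i_min] < kNoiseThreshold &&
        ranges[hi] - ranges[i_min] < kNoiseThreshold)
        return false;                               // difference too small: noise
    *r_min = ranges[i_min];
    return true;
}
```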
<br />
Finally, using the laser distance in the x- and y-direction, the difference in range with the previous function call is calculated. These differences represent the movements made between function calls, and therefore the change in position. On every function call, the differences in each direction are added to the total of the differences calculated in previous calls. These variables therefore represent the global X, Y and A position.<br />
<br />
=== Function 3: ODO-retrieval: ODOLocalisation()===<br />
This function creates an odometry object and reads the odometry data from the io-layer. It starts with an initial X=0, Y=0, A=A_init. It calculates the differences each time the function is called: if the x-position is 5 meters and the next function call reads 5.2 meters, then the difference equals the recent reading minus the old reading. The differences are stored, as well as the old position and the sum of the differences (the actual position). This info is used by the other functions in the file that need it.<br />
<br />
[[File:function3wiki.png|1000px|center|thumb|Odometry localisation in code for Kalman algorithm]]<br />
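The differencing described above can be sketched as follows (the member names are illustrative, not the project's actual ones):

```cpp
// Sketch of the odometry differencing in ODOLocalisation(): each call stores
// the difference with the previous reading and accumulates it into a global
// position. One axis is shown; the same applies to Y and A.
struct OdoTracker {
    double prev = 0.0;   // previous raw odometry reading
    double total = 0.0;  // accumulated global position (sum of differences)
    double diff = 0.0;   // change since the last call

    void update(double raw) {
        diff = raw - prev;   // e.g. 5.2 m - 5.0 m = 0.2 m
        total += diff;
        prev = raw;
    }
};
```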
<br />
=== Function 4: Making Z: MakeZ()===<br />
<br />
Z is a 6-entry column vector. Every time the function is called, it updates the entries as follows: <br />
The first three entries (corresponding to X_lrf, Y_lrf and A_lrf respectively) equal their global position plus the difference in position calculated in the LRF retrieval function.<br />
The last three entries (X_odo, Y_odo and A_odo) are updated in the same manner, but using the odometry differences calculated in function 3 instead of the LRF differences. <br />
Now the Z column-vector is ready to be used by the Kalman filter.<br />
<br />
<br />
[[File:function4wiki.png|1000px|center|thumb|Construction of Z column-vector in code for Kalman algorithm]]<br />
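The construction of Z can be sketched as follows; the array layout [x_lrf, y_lrf, a_lrf, x_odo, y_odo, a_odo] is an assumption for illustration:

```cpp
// Sketch of MakeZ(): the 6-entry measurement vector stacks the LRF-based and
// odometry-based positions, each updated with its own position difference.
void makeZ(double z[6], const double d_lrf[3], const double d_odo[3]) {
    for (int i = 0; i < 3; ++i) {
        z[i]     += d_lrf[i];  // LRF entries: previous value plus LRF delta
        z[i + 3] += d_odo[i];  // odometry entries: plus odometry delta
    }
}
```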
<br />
=== Function 5: Making B: MakeB()===<br />
<br />
This function simply constructs the 3x3 matrix B that is used in the Kalman algorithm. The first two rows concern the X- and Y-position. The first row consists of cos(X[2]), sin(X[2]) and zero respectively.<br />
Of course the mirrored relation holds for the Y-position, so the second row consists of sin(X[2]), cos(X[2]) and zero respectively.<br />
The third row contains zero entries in the first two columns, and a value of 1 for the angular entry.<br />
<br />
[[File:function5wiki.png|1000px|center|thumb|Construction of B matrix in code for Kalman algorithm]]<br />
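A sketch of this construction; note that the wiki text leaves out the signs of the sine terms, so the standard planar-rotation convention used here is an assumption:

```cpp
#include <cmath>

// Sketch of MakeB(): a 3x3 matrix that rotates body-frame velocities into the
// global frame using the current angle X[2]. The sign convention (standard
// planar rotation) is an assumption, not confirmed by the wiki text.
void makeB(double B[3][3], double a) {
    B[0][0] = std::cos(a); B[0][1] = -std::sin(a); B[0][2] = 0.0;
    B[1][0] = std::sin(a); B[1][1] =  std::cos(a); B[1][2] = 0.0;
    B[2][0] = 0.0;         B[2][1] = 0.0;          B[2][2] = 1.0;
}
```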
<br />
=== Function 6: Kalman filter: UpdateKalman()===<br />
<br />
A short explanation of this function:<br />
First it is checked whether X0 is set. If it is, LrfLocalisation (function 2) is called, as well as ODOLocalisation (function 3).<br />
Now we have two sets of X, Y and A positions, each with its own accuracy (odometry drift, LRF variance).<br />
Functions 4 and 5 are called next, in order to construct Z and B.<br />
If no minimum ranges were found, the R matrix is switched from R1 to R2 (only odometry data is taken). If, however, a minimum has been found, then R1 is used (both odometry and LRF info).<br />
Next the U vector is constructed, consisting of the velocity components in the X, Y and A directions. <br />
The ahead projection of the position (X) is done using this data. After that, the projected error covariance matrix P is calculated, the Kalman gain is computed and finally the estimates are updated with the measurements, by which a filtered position is obtained. <br />
<br />
If X0 was not set, the function SetX0() is called again, until X0 is set.<br />
The figure below shows the structure of the algorithm far more clearly.<br />
<br />
<br />
[[File:function6wiki.png|1400px|center|thumb|Kalman Algorithm used as filter]]<br />
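To illustrate the predict/correct cycle without the full matrix algebra, here is a heavily simplified, one-state (per-axis) version; all noise values are assumptions and this is not the project's implementation:

```cpp
// Heavily simplified, per-axis sketch of the predict/correct cycle in
// UpdateKalman(). All matrices are reduced to scalars (one state axis);
// the noise values are assumed for illustration.
struct ScalarKalman {
    double x = 0.0;   // state estimate (one axis)
    double p = 1.0;   // error covariance
    double q = 0.01;  // process noise (assumed)

    // Predict ahead with velocity input u over timestep dt.
    void predict(double u, double dt) {
        x += u * dt;  // x_k^- = A x_{k-1} + B u, with A = 1
        p += q;       // P_k^- = A P A^T + Q
    }

    // Correct with one measurement z of variance r.
    void correct(double z, double r) {
        double k = p / (p + r);  // Kalman gain
        x += k * (z - x);        // update estimate with the innovation
        p *= (1.0 - k);          // update covariance
    }
};
```

An LRF and an odometry measurement can be fused by calling correct() twice with their respective variances; switching to the odometry-only R matrix then corresponds to giving the LRF measurement a very large variance.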
<br />
== Technicalities and solutions==<br />
<br />
<br />
=== Problems with Localisation block===<br />
<br />
The ambitious idea to use no assumptions, and to make the robot able to localise itself properly in any arbitrary environment, proved to take more time to get working than predicted.<br />
The problem that occurred is that the angle A would sometimes take on strange values. Through this error the whole program produces gibberish, as the angle is used widely throughout the functions. Various strategies were tried to make the program work properly; however, every time it seemed to work in a certain maze, it would cause problems in a different maze. <br />
As the localisation is of vital importance for the mapping to work, we had to think of a back-up plan in case this would not work out. To this end, we devised several simplified versions of the localisation. These versions use more assumptions and have lower accuracy than before. However, it seemed better to have somewhat less accuracy, knowing that the procedure would work in any maze, than a very accurate but unstable procedure.<br />
<br />
=== Alternative 1: Simpler localisation, using axis-alignment and phase snaps ===<br />
For this localisation method, both laser data and odometry data are used, without using a Kalman filter.<br />
====Localisation method====<br />
[[File:Dirty_loca.jpg|thumb|left|Schematic view of the localisation method]] As shown in the diagram, at the initial position 1 the robot stores the distance to the furthest wall in front of it, which is done by finding the element of the laser data with the largest range and the corresponding angle of this element. Then, at position 2, where the robot starts to turn, it runs a startpoint() function, which again calculates its distance to the wall. Comparing the distance change between positions 1 and 2, the travelled distance A can be obtained, which is used to update the global coordinates. In addition to this, it also saves the odometry data at this point.<br><br />
When the robot stops turning at position 3, the function endpoint() is called. During the turn it is not possible to use laser data to update the global coordinates; therefore, the change in odometry data between these two positions is used for the coordinate update. After the turn has finished, at position 3, the robot again stores its distance to the furthest wall, shown as distance B, which can be used to obtain the next travelled distance. <br />
<br />
====Turn detection====<br />
Now, the difficulty lies in recognising points 2 and 3, or in other words the times at which to call the startpoint() and endpoint() functions. For this, the change of the angular data in the odometry is used, since the angular value will change because of the applied angular velocity. However, only comparing two subsequent elements to each other is not robust, as we know from experiments that the odometry data can sometimes have very high peaks. Therefore, the angular data is stored in an array: each iteration the current odometry angle is added to a new array, which is then compared to the old array to obtain the angular change at each of the iterations saved in the array. By then setting conditions such as the threshold value above which we consider the robot to be turning, the number of elements we require to be above this value, and the number of elements in the array, the startpoint and endpoint of a turn can be detected.<br />
<br />
In another version of this code, an exponential weighted moving average was used, which requires only one variable instead of an array. The average can very easily be updated:<br />
<br />
<math>\dot{A}_{new}=(1-w)\dot{A}_{old}+w\,\frac{\Delta A}{\Delta t}</math> with <math>w<1</math> a weighting variable and <math>\Delta A</math> the change in odometry angle over timestep <math>\Delta t</math>. <br />
<br />
A high <math>w</math> discards old values faster, whereas a low <math>w</math> gives a smoother average. If the average exceeds a certain threshold, it is detected as a turn.<br />
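A sketch of this moving-average turn detector; the weight and threshold values are assumptions:

```cpp
#include <cmath>

// Sketch of the exponentially weighted moving average used for turn
// detection: one variable replaces the whole array of angle samples.
// Weight and threshold values are assumed for illustration.
struct TurnDetector {
    double avg = 0.0;        // smoothed angular rate
    double w = 0.3;          // weight: higher discards old values faster
    double threshold = 0.2;  // [rad/s], assumed turn threshold

    // Feed the angular change da over timestep dt; returns true while turning.
    bool update(double da, double dt) {
        avg = (1.0 - w) * avg + w * (da / dt);
        return std::fabs(avg) > threshold;
    }
};
```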
<br />
====Tuning====<br />
In order to get a very robust and simple code, certain assumptions were made. For instance, the turns which were detected from the odometry data were translated to turns of a multiple of pi. Furthermore, values of both the odometry and laser data were discarded if they exceeded 100 [m]. <br><br />
The code itself appeared to work quite well: the angle was correct roughly 19 out of 20 times, and even then it would correct itself after some time. The x and y values, however, still needed tuning. <br><br />
<br />
====Final implementation====<br />
In the final code, this localisation method was not used. Firstly, because the communication and implementation into mapping was not done in time. Secondly, and most importantly, this method polls the odometry data every cycle, and it became apparent that calling the odometry data too often disturbs the robot's ability to process commands such as driving and reading laser data. In other words, it made the code and robot control as a whole less robust and reliable. Therefore this code was not used.<br />
<br />
=== Alternative 2: Simplest localisation, using no Kalman filter ===<br />
<br />
In order to finish a simpler version of the localisation in time, the use of a Kalman filter was dropped. In total, three big changes were made to the algorithm:<br />
* The Kalman filter was dropped in favor of measuring distances only when arriving at intersections<br />
* The precise angle calculations were dropped in favor of an angle with four possible values for the direction (-Pi/Pi, -Pi/2, 0 and Pi/2)<br />
* The LRF calculations were simplified<br />
<br />
The Kalman update function contained several bugs which we were unable to find in time. Hence the Kalman filter was dropped and measurements are only done when approaching an intersection. This is possible since between two possible turns Pico can only drive along one axis. <br />
Because the precise angle is only needed for the original LRF calculation, it could be dropped in this simplified version. Instead, the angle only keeps track of the direction of the robot, in order to know along which axis Pico is travelling. This angle is updated by the decision part of the code, which adds or subtracts Pi/2 for the direction values left and right respectively.<br />
Since the calculations with the LRF proved the most complicated, these were simplified. Instead of searching in the presumed direction of the x- and y-axes for local minima, only the shortest distance to the left wall is calculated. This is the distance to either the x- or the y-axis, depending on the orientation of the robot. The other axis is measured by simply picking the sensor that corresponds to the beam number with a rotation of Pi/2. <br />
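The quantized direction bookkeeping described above can be sketched as follows; encoding the four directions as an integer 0..3 is an illustrative choice, not the project's actual code:

```cpp
// Sketch of the quantized heading in the simplified localisation: the
// direction takes only four values and is updated by +-pi/2 on each turn
// decision. Encoding: 0 = +x, 1 = +y, 2 = -x (angle pi or -pi), 3 = -y.
enum Turn { LEFT = +1, RIGHT = -1, STRAIGHT = 0 };

int updateDirection(int dir, Turn t) {
    return ((dir + t) % 4 + 4) % 4;  // wrap into 0..3
}

// Map a direction index back to an angle in {0, pi/2, pi, -pi/2}.
double directionToAngle(int dir) {
    const double kPi = 3.14159265358979323846;
    return (dir == 3) ? -kPi / 2 : dir * kPi / 2;
}
```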
<br />
This method was abandoned because of time constraints, when we realised that our way of tracking angle changes did not support non-intersection corners, since no decision is made while taking such a corner.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation&diff=19990Embedded Motion Control 2015 Group 3/Localisation2015-06-23T17:15:32Z<p>S119349: /* Turn detection */</p>
<hr />
<div>= Localisation = <br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
This page goes into detail about localisation.<br />
<br />
== Requirements of Localisation ==<br />
<br />
In order to be able to locate itself within its environment, the robot needs information. The required data to obtain global position are:<br />
# global x-position [m]<br />
# global y-position [m]<br />
# global a-angle [rad.]<br />
<br />
The error in the position data must be quantified and minimized, in order not to accumulate mistakes in the location in the long run. For example, if the robot's x- and y-coordinates drift due to an error, the robot will think it is at a different location, whereas it is actually standing still in exactly the same location and orientation.<br />
<br />
The sensor data required to obtain the above mentioned position data are the following:<br />
# odometry: global x [m] , global y [m] , global a [rad.]<br />
# LRF: all laser ranges [m]<br />
# velocity input to robot<br />
<br />
== Method of localisation ==<br />
The robot will need global coordinates. There are two sensors which it can use to determine these coordinates. However, both sensors have their own drawbacks. <br />
*The odometry sensor provides global x-y coordinates and the angle. There is not much variance in this sensor's data, but there is a drift (bias) that accumulates over time. The odometry data can be viewed as feedforward information for the system.<br />
*The LRF sensor provides 1000 ranges [m] with distances to objects over a scope of 270 degrees around the robot. This sensor shows no bias, but it does have variance. Furthermore, the raw LRF data does not directly provide the global coordinates that we want. Therefore this data has to be converted into position changes in terms of the global coordinates. The LRF data can be seen as the feedback loop of the system.<br />
<br />
When the robot starts its program, the global coordinates are all zero, so the start position of the robot determines the direction of the x- and y-axes. The data from the odometry and the LRF is updated at each time instant. The odometry works very intuitively: it tells you how far you have moved based on wheel rotation. The LRF, however, works as follows: it measures the distances to the objects in the environment at time t0, and measures again at t1. The difference in distance, converted to the desired coordinates, should be equal to the odometry data. Of course this will not happen exactly, due to the errors in the sensors, which is why a filter is used to fuse the data between update steps.<br />
<br />
<br />
A '''Kalman filter''' is used to filter the data obtained from odometry as well as from LRF, in order to maximize the accuracy.<br />
<br />
=== Kalman filter ===<br />
The Kalman filter uses an update cycle with two steps. In the first step the new position is estimated based on the previous position and the input, and an estimate of its error is made, which is used in the second step. In the second step, measurement data is used to correct the estimated position. Since the definition of the directions of the x- and y-axes is arbitrary, they are aligned with the corridor in which Pico starts. The algorithm that is used is shown in the figure below:<br />
<br />
[[File:scheme_Kalman.gif|900px|center|thumb|Scheme Kalman]]<br />
The various variables used in the figure above, are explained here:<br />
<br />
<math>\hat{x}_k^- </math> is the predicted ahead state variable at discrete time instance <math>k</math>. This column vector consists of the global x-position, global y-position and the global angle. So logically, <math>\hat{x}_{k-1}^- </math> is the same vector at a previous time instance.<br />
<br />
<math>A</math> is an n by n matrix that relates the state at the previous time step <math> k-1 </math> to the state at the current step <math> k </math>, in the absence of either a driving function or process noise.<br />
<br />
<math>B</math> is an n by n matrix that relates the optional control input u to the state variable.<br />
<br />
<math>u_{k-1} </math> is the control input at the previous state of time. So this corresponds to a 3 by 1 column vector containing the velocities that were sent to the wheel base:<br />
# vx: translational velocity in x-direction [m/s]<br />
# vy: translational velocity in y-direction [m/s]<br />
# va: angular velocity around z axis [rad./s]<br />
<br />
<math> P_k^-</math> is an n by n matrix containing the error covariance predicted ahead at time instance <math>k</math>.<br />
<br />
<math> Q</math> is an n by n matrix containing the covariance of the process noise.<br />
<br />
<math> K_k</math> is an n by m matrix that represents the Kalman gain.<br />
<br />
<br />
<math> H</math> is an m by n matrix that relates the state to the measurement <math>z_k</math> (here n = 3 and m = 6).<br />
<br />
<math> R</math> is an m by m matrix that contains the measurement noise covariance.<br />
<br />
<math> z_k</math> is the measured data in a column vector (to be compared to predictions).<br />
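Written out, the predict and correct steps shown in the scheme are the standard discrete-time Kalman filter equations, given here for reference:<br />
<br />
<math>\hat{x}_k^- = A\hat{x}_{k-1} + Bu_{k-1}, \qquad P_k^- = AP_{k-1}A^T + Q</math><br />
<br />
<math>K_k = P_k^-H^T\left(HP_k^-H^T + R\right)^{-1}, \qquad \hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right), \qquad P_k = (I - K_kH)P_k^-</math><br />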
<br />
<br />
Now, the difficulty lies in recognising point 2 and 3, or in other words the time at which to call the startpoint() and endpoint() functions. For this, the change of the angular data in the odometry is used, since the angular value will change because of the applied angular velocity. However, only comparing two subsequent elements to eachother is not robust, as we know from experiments that the odometry data can sometimes have very high peaks. Therefore, the angular data is stored in an array, and each iteration the current odometry angle is added to a new array, which is then compared to the old array to obtain the angular change at each of the iterations saved in the array. By then setting conditions such as the threshold value from which we consider it to be turning, the amount of elements we require to be above this value, and the amount of elements in the array, the startpoint and endpoint of a turn can be detected.<br />
<br />
In another version of this code, an exponential weighted moving average was used, which requires only one variable instead of an array. The average can very easily be updated:<br />
<math>\dot{A}_{new}=\frac{(1-w)\dot{A}_{old}+w\dot{A}_{current}}{\Delta t}</math> with <math>w<1</math> a weighting variable. <br />
A high <math>w</math> discards old values faster, whereas a low <math>w</math> gives a smoother average. If the average exceeds a certain threshold, it is detected as a turn.<br />
<br />
====Tuning====<br />
In order to get a very robust and simple code, certain assumptions were made. For instance, the turns which were detected by the odometry data were then translated to turns of a multitude of pi. Furthermore, the detected values of both the odometry and laser data were not used if they exceeded 100 [m]. <br><br />
The code itself did appear to work quite well, the angle was correct around 19/20 times, and even then it seemed to correct itself after some time. The x and y values however still needed tuning. <br><br />
<br />
====Final implementation====<br />
In the final code, this localisation method was not used. Firstly because the communication and implementation into mapping was not done in time. And secondly, and most importantly, for this method the odometry data is called to every time. And it became apparent that calling the odometry data too often disturbs the robots ability to process commands such as the driving and reading laserdata. In other words, it made the whole of the code and robot control less robust and reliable. Therefore this code was not used.<br />
<br />
=== Alternative 2: Simplest localisation, using no Kalman filter ===<br />
<br />
In order to finish a simpeler version of localisation in time the use of a kalman filter was dropped. In total three big changes were made to the algorithm :<br />
- The kalman filter was dropped in favor of measuring distances only when arriving at intersections<br />
- The precise angle calculations were dropped in favor of an angle with four possible values for the direction ( -Pi/Pi , -Pi/2, 0 and Pi/2)<br />
- The LRF calculations were simplified<br />
<br />
The kalman update function provided several bugs which we were unable to find in time. Hence the kalman filter was dropped and measurements are only done when approching an intersection. This is done since between two possible turns Pico can only drive along one axis. <br />
Because the precise angle is only needed for the original LRF calculation this can be dropped for this simplified version. Instead the angle only keeps track of the direction of the robot in order to know along which axis Pico is travelling. This angle is updated based on the decision part of the code by adding or subtracting Pi/2 for direction values of left and right respectively.<br />
Since the calculations with the LRF proved most complicated these were simplified. Instead of searching in the presumed direction of the x- and y-axis for the local minima only the shortest distance to the left wall is calculated. This is the distance to either the x- or y-axis depending on the orientation of the robot. The other axis is measured by simply picking the appropriate sensor which corresponds with the beam number which has a rotation of Pi/2. <br />
<br />
This method was abandoned because of time contraints when we realised that our way of tracking angle changes did not support non-intersection corners since there is no decision made while taking such a corner.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation&diff=19989Embedded Motion Control 2015 Group 3/Localisation2015-06-23T17:15:09Z<p>S119349: /* Turn detection */</p>
<hr />
<div>= Localisation = <br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
This page goes into details about localisation.<br />
<br />
== Requirements of Localisation ==<br />
<br />
In order to be able to locate itself within its environment, the robot needs information. The required data to obtain global position are:<br />
# global x-position [m]<br />
# global y-position [m]<br />
# global a-angle [rad.]<br />
<br />
The error in the position data must be quantified and minimized, so that the location does not drift in the long run. For example, if the robot's x- and y-coordinates change due to an error, the robot will think it is at a different location, whereas it is actually standing still in exactly the same place and orientation.<br />
<br />
The sensor data required to obtain the above mentioned position data are the following:<br />
# odometry: global x [m] , global y [m] , global a [rad.]<br />
# LRF: all laser ranges [m]<br />
# velocity input to robot<br />
<br />
== Method of localisation ==<br />
The robot will need global coordinates. There are two sensors which it can use to determine these coordinates. However both sensors have their own drawbacks. <br />
*The odometry sensor provides global x-y coordinates and angle. There is not much variance in the data of the sensor, but there is a drift (bias) that will accumulate over time. The odometry data can be viewed as feedforward information for the system.<br />
*The LRF sensor provides 1000 ranges [m] with distances to objects over a scope of 270 degrees around the robot. This sensor shows no bias, but it does have variance. Furthermore, the raw LRF data does not provide the global coordinates that we want, so it has to be converted into position changes in terms of the global coordinates. The LRF data can be seen as the feedback loop of the system.<br />
<br />
When the robot starts its program initially, the global coordinates will be all zero. So the start position of the robot determines the direction of the x- and y-axes. The data from the odometry and the LRF will be updated at each time instant. The odometry just works very intuitively: it tells you how far you have moved based on wheel-rotation. In the case of LRF however, the following happens: It measures the distances to the objects in the environment at time t0. It measures again at t1. The difference in distance, converted to the wanted coordinates, should be equal to the odometry data. Of course this will not happen due to the errors in the sensors, but that is why a filter is used to filter the data in between each next update step.<br />
<br />
<br />
A '''Kalman filter''' is used to filter the data obtained from odometry as well as from LRF, in order to maximize the accuracy.<br />
<br />
=== Kalman filter ===<br />
The Kalman filter uses an update cycle with two steps. In the first step the new position is estimated based on the previous position and the input. An estimate of its error is then made, which is used in the second step. In the second step, data from measurements is used to correct the estimated position. Since the definition of the directions of the x- and y-axes is arbitrary, they are aligned to the corridor in which Pico starts. The algorithm that is used is shown in the figure below:<br />
<br />
[[File:scheme_Kalman.gif|900px|center|thumb|Scheme Kalman]]<br />
The various variables used in the figure above, are explained here:<br />
<br />
<math>\hat{x}_k^- </math> is the predicted ahead state variable at discrete time instance <math>k</math>. This column vector consists of the global x-position, global y-position and the global angle. So logically, <math>\hat{x}_{k-1}^- </math> is the same vector at a previous time instance.<br />
<br />
<math>A</math> is an n by n matrix that relates the state at the previous time step <math> k-1 </math> to the state at the current step <math> k </math>, in the absence of either a driving function or process noise.<br />
<br />
<math>B</math> is an n by n matrix that relates the optional control input u to the state variable.<br />
<br />
<math>u_{k-1} </math> is the control input at the previous state of time. So this corresponds to a 3 by 1 column vector containing the velocities that were sent to the wheel base:<br />
# vx: translational velocity in x-direction [m/s]<br />
# vy: translational velocity in y-direction [m/s]<br />
# va: angular velocity around z axis [rad./s]<br />
<br />
<math> P_k^-</math> is a n by n matrix containing the error covariance predicted ahead at time instance <math>k</math>.<br />
<br />
<math> Q</math> is a n by n matrix containing covariance of the process noise.<br />
<br />
<math> K_k</math> is an n by m matrix that represents the Kalman gain.<br />
<br />
<br />
<math> H</math> is an m by n matrix that relates the state to the measurement <math>z_k</math>.<br />
<br />
<math> R</math> is an m by m matrix that contains the measurement noise covariance.<br />
<br />
<math> z_k</math> is the measured data in an m by 1 column vector (to be compared to predictions).<br />
<br />
<br />
During the second step of the Kalman update both the LRF and odometry are used. For both sensors the difference between the current and last value is used to determine the position change since the last update. This value is then added to the previous position from the Kalman update. The odometry data can be used directly. For the LRF, however, the x, y and a values first have to be calculated from the raw LRF data. This is done by measuring the distance to the end of the corridors. Since Pico can see 270 degrees around itself, it can always measure the distance to one end of the corridor it is in, as well as the distance to one of the side walls of the corridor. <br />
<br />
[[File:LRF_KALMAN.png|200px|center|thumb|LRF Kalman]]<br />
<br />
The estimated angle is used to calculate which sensor should point towards the end of the corridor. An interval around the corresponding LRF beam is searched for a local minimum, which should belong to the beam that hits the end wall perpendicularly. This beam points directly at the end of the corridor and is then used to calculate the LRF value for the angle of Pico. The difference is calculated between the previous and current distance to the end wall, which is the position change for either x or y used in the Kalman update. The other position change is calculated similarly, but instead of the end of the corridor the distance to the side wall is used.<br />
Since it is possible to lose sight of a wall, for instance when driving on an intersection, a safeguard is put in place. If the position change based on the LRF is too big, it is assumed that the LRF data is unreliable for that update cycle and only the odometry data is used. This is done by switching between two R matrices, one of which sets the contributions of the laser data to zero. In the regular R matrix the contribution of the LRF data is weighed more heavily, under the assumption that the LRF is more reliable overall.<br />
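The two-step cycle and the R-switching trick can be illustrated with a scalar Kalman filter. This is a sketch for intuition only: a 1-D state with a direct measurement, with all names invented here; it is not the project's 3-state filter.

```cpp
#include <cassert>

// Scalar Kalman filter illustrating the predict/correct cycle described
// above (1-D state, direct measurement). Illustrative sketch only.
struct ScalarKalman {
    double x = 0.0;  // state estimate (e.g. position along one axis)
    double P = 1.0;  // estimated error covariance
    double Q = 0.0;  // process noise covariance
    double R = 1.0;  // measurement noise covariance

    // Step 1: predict ahead with input u (here A = B = 1).
    void predict(double u) {
        x += u;      // x_k^- = x_{k-1} + u_{k-1}
        P += Q;      // P_k^- = P_{k-1} + Q
    }

    // Step 2: correct with measurement z.
    void correct(double z) {
        double K = P / (P + R);  // Kalman gain
        x += K * (z - x);        // blend prediction and measurement
        P *= (1.0 - K);          // update error covariance
    }
};
```

A very large R drives the gain K towards zero, so the measurement is effectively ignored; this is the same mechanism as switching to the second R matrix that zeroes out the laser-data contribution.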
<br />
== Implementation of method ==<br />
Here the implementation of the method will be briefly discussed.<br />
===Interface===<br />
Localisation is a separate cpp-file which can be seen as a block. It uses the Scan class to read sensor data. It also uses the Drive class to retrieve information on the velocity. The variables that are used as arguments are copies and are not passed by reference, as they are only needed to calculate the position of the robot. The variables that are only used within the localisation class are of course given private status. After all necessary blocks are defined in the header file localisation.h, accompanied by the function prototypes, the file localisation.cpp can be properly used. The file consists of 6 functions in total:<br />
* Initializing initial position<br />
* Retrieving LRF-data and converting this into useful X,Y,A data, stored to be used in another function<br />
* Retrieving ODO-data and storing this to be used in another function<br />
* Composing the Z column-vector used in the Kalman algorithm<br />
* Composing the B matrix that is used in the Kalman algorithm<br />
* Function containing the actual Kalman algorithm, which uses the information retrieved in the above functions to give an accurate localisation<br />
The resulting X, Y, A coordinates are made available to the mapping block for further use. There this info can be used to pinpoint and save nodes, as well as to recognize whether the robot has visited these coordinates before, and much more. See the mapping section for further details on this block.<br />
<br />
=== Function 1: initialize position: SetX0()===<br />
In this function the position is determined by means of a for-loop that loops over all of the laser ranges, determining the minimum range with an if-statement. When the minimum range is found, it is stored along with its beam number. This direction is set to correspond to the global Y-direction, and the global angle A is calculated as well, using the beam number. The global X-axis is then set using the global angle. These values are all stored and X, Y, A are initialized.<br />
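The loop described above can be sketched as follows; the function name is hypothetical, and the real SetX0() additionally derives the axes and the angle from the result.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Loop over all laser ranges and return the beam number of the minimum
// range, as SetX0() is described to do (sketch with hypothetical names).
std::size_t minRangeBeam(const std::vector<double>& ranges) {
    std::size_t minBeam = 0;
    for (std::size_t i = 1; i < ranges.size(); ++i) {
        if (ranges[i] < ranges[minBeam]) {
            minBeam = i;  // store the beam number of the smallest range so far
        }
    }
    return minBeam;
}
```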
<br />
=== Function 2: LRF-retrieval and conversion: LrfLocalisation()===<br />
This function uses the range data to extrapolate the position of the robot. <br />
First the angular position A is evaluated. If this angle is larger than pi, or smaller than minus pi, it will be wrapped by subtracting or adding 2 pi respectively.<br />
This wrapped angle is used to determine the beam numbers of the current minimal ranges. This is simply done using the following relation:<br />
<br />
 n_x = (X[2]+robot.angle_max) / robot.angle_increment;<br />
In this equation n_x corresponds to the beam number of the global X-axis and position, X[2] is the global angle A, and the other parameters are self-explanatory. The range belonging to this beam number is stored. The same procedure is used to determine the Y-data.<br />
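With explicit parentheses, the wrapping step and this relation can be sketched as below. The ScanParams struct and the test values are assumptions for illustration, not values from the actual code; the text earlier mentions 1000 beams over 270 degrees, which would give angle_max of roughly 3*pi/4.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical scanner parameters (mirroring robot.angle_max and
// robot.angle_increment from the snippet above).
struct ScanParams {
    double angle_max;        // half opening angle [rad]
    double angle_increment;  // angle between adjacent beams [rad]
};

// Wrap an angle to (-pi, pi], as described above, by adding or
// subtracting 2*pi.
double wrapAngle(double a) {
    while (a > M_PI)   a -= 2.0 * M_PI;
    while (a <= -M_PI) a += 2.0 * M_PI;
    return a;
}

// Beam number pointing along a given global angle:
// n = (A + angle_max) / angle_increment.
int beamForAngle(double globalAngle, const ScanParams& p) {
    return static_cast<int>(std::round(
        (wrapAngle(globalAngle) + p.angle_max) / p.angle_increment));
}
```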
<br />
In order to create robustness and to prevent problems in other function calls, some if-statements are included to switch axes if necessary. If the beam number n_x is smaller than 4 or bigger than 994, it is one of the outer ranges. If one of these if-statements is true, the beam number n_x will be changed: we switch from the positive to the negative axis, or the other way around, by adding pi to the angle. This way, the axis cannot get out of sight of the robot. Simple boolean expressions are used to notify the system whether the positive or negative axis is being tracked. This is important because it determines whether distances should be counted negative or positive. This system of axis-switches is incorporated because no assumption is made regarding the initial position: the robot should be able to determine an axis to track, regardless of its initial angle in the maze.<br />
<br />
Next the values of the ranges between n_x-5 and n_x+5 are checked using a for-loop. If a minimum range is found, it is stored. However, if the minimum value lies on the outer bounds of the loop, or if the difference is too small (noise), then no minimum is found. This is done for both the x- and y-direction.<br />
<br />
Finally, using the laser distance in x- and y-direction, the difference in range with the previous function call is calculated. These differences represent the movements made between function calls, and therefore the difference in position. Every function call, the differences in each direction are added to the total of differences calculated in previous calls. These variables therefore represent the global X, Y and A position.<br />
<br />
=== Function 3: ODO-retrieval: ODOLocalisation()===<br />
This function creates an odometry object and reads the odometry data from the io-layer. It starts with an initial X=0, Y=0, A=A_init. It calculates the differences each time the function is called. So if the x-position is 5 meter, and at the next function call it reads 5.2 meter, then the difference equals the recent reading minus the old reading. The differences are stored, as well as the old position and the sum of the differences (the actual position). This info is to be used by the other functions in the file that need it.<br />
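The per-call differencing for one axis can be sketched as follows; the names are illustrative, and the actual ODOLocalisation() handles X, Y and A together.

```cpp
#include <cassert>

// Track one odometry axis by accumulating per-call differences, as
// described above (sketch; names do not come from the actual code).
struct OdoAxis {
    double previous = 0.0;  // reading at the previous function call
    double position = 0.0;  // sum of differences = actual position

    // Feed the newest raw odometry reading; returns the difference
    // (recent reading minus old reading) and updates the totals.
    double update(double reading) {
        double diff = reading - previous;  // e.g. 5.2 - 5.0 = 0.2
        previous = reading;
        position += diff;
        return diff;
    }
};
```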
<br />
[[File:function3wiki.png|1000px|center|thumb|Odometry localisation in code for Kalman algorithm]]<br />
<br />
=== Function 4: Making Z: MakeZ()===<br />
<br />
Z is a 6-entry column vector. Every time it is called, it will update the entries as follows: <br />
The first three entries (corresponding to X_lrf, Y_lrf and A_lrf respectively) equal their global position plus the difference in position calculated in the LRF-retrieval function.<br />
The latter three entries (X_odo, Y_odo and A_odo) correspond in the same manner to their odometry counterparts: they are updated using the difference in odometry instead of the LRF. <br />
The odometry differences are calculated in function 3. <br />
Now the Z-column vector is ready to be used by the Kalman filter.<br />
<br />
<br />
[[File:function4wiki.png|1000px|center|thumb|Construction of Z column-vector in code for Kalman algorithm]]<br />
<br />
=== Function 5: Making B: MakeB()===<br />
<br />
This function simply constructs the 3x3 matrix B that is used in the Kalman algorithm. The first two rows concern the X- and Y-position: the first row consists of cos(X[2]), sin(X[2]) and zero respectively.<br />
The mirror holds for the Y-position, so the second row consists of sin(X[2]), cos(X[2]) and zero respectively.<br />
The third row contains zero entries for the first two columns, and a value of 1 for the angular entry.<br />
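A sketch of this construction (hypothetical function name; the entries follow the description above verbatim, and note that a standard planar rotation matrix would carry a minus sign on one of the sine terms, so the screenshot of the actual code is authoritative):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Construct the 3x3 input matrix B from the global angle A = X[2],
// with the entries exactly as described in the text above.
Mat3 makeB(double A) {
    Mat3 B = {{{std::cos(A), std::sin(A), 0.0},
               {std::sin(A), std::cos(A), 0.0},
               {0.0,         0.0,         1.0}}};
    return B;
}
```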
<br />
[[File:function5wiki.png|1000px|center|thumb|Construction of B matrix in code for Kalman algorithm]]<br />
<br />
=== Function 6: Kalman filter: UpdateKalman()===<br />
<br />
A short explanation on this function:<br />
First it is checked whether X0 is set. If this is true, LrfLocalisation (function 2) is called, as well as ODOLocalisation (function 3).<br />
Now we have two sets of X, Y and A positions, each with its own accuracy (odometry drift, LRF variance).<br />
Functions 4 and 5 are called next, in order to construct Z and B.<br />
If no minimum ranges were found, the R matrix is switched from R1 to R2 (only odometry data is taken). If, however, a minimum has been found, then R1 is used (both odometry and LRF info).<br />
Next the U vector is constructed, consisting of the velocity components in the X, Y and A directions.<br />
The ahead projection of the position (X) is done using this data. Once that has been done, the projected error covariance matrix P is calculated. The Kalman gain is calculated and finally the estimates are updated with the measurements, by which a filtered position is presented.<br />
<br />
If X0 was not set, the function SetX0() is called again, until X0 is set.<br />
The figure below shows the structure of the algorithm far more clearly.<br />
<br />
<br />
[[File:function6wiki.png|1400px|center|thumb|Kalman Algorithm used as filter]]<br />
<br />
== Technicalities and solutions==<br />
<br />
<br />
=== Problems with Localisation block===<br />
<br />
The ambitious idea to use no assumptions, and to make the robot able to localise itself properly in any arbitrary environment, proved to take more time to get working than predicted.<br />
The problem that occurred is that the angle A would sometimes get strange values. Because of this error, the whole program produces gibberish, as the angle is widely used throughout the functions. Various strategies were used to make the program work properly. However, every time it seemed to work in a certain maze, it would cause problems in a different maze. <br />
As the localisation is of vital importance for the mapping to work, we had to think of a back-up plan in case this would not work out. To this end, we devised several simplified versions of the localisation. These versions use more assumptions, as well as a lower accuracy than before. However, it seemed better to have somewhat less accuracy, knowing that the procedure would work in any maze, than a very accurate but unstable procedure.<br />
<br />
=== Alternative 1: Simpler localisation, using axis-alignment and phase snaps ===<br />
For this localisation method, both laser data and odometry data are used, without using a Kalman filter.<br />
====Localisation method====<br />
[[File:Dirty_loca.jpg|thumb|left|Schematic view of the localisation method]] As shown in the diagram, at the initial position 1, the robot will store the distance to the furthest wall in front of it, which is done by finding the element of the laser data with the largest range and the corresponding angle of this element. Then, at position 2, where the robot starts to turn, it will run a startpoint() function, which will again calculate its distance to the wall. Comparing the distance change between positions 1 and 2, the traveled distance A can be obtained, which is used to update the global coordinates. In addition to this, it also saves the odometry data at this point.<br><br />
When it stops turning at position 3, the function endpoint() is called. Because it is not possible to use laser data to update the global coordinates during a turn, the change in odometry data between these two positions is used for the coordinate update. After the turn has finished, at the end-of-turn position 3, the robot will again store its distance to the furthest wall, shown as distance B, which can be used to obtain the next traveling distance. <br />
<br />
====Turn detection====<br />
Now, the difficulty lies in recognising points 2 and 3, or in other words the times at which to call the startpoint() and endpoint() functions. For this, the change of the angular data in the odometry is used, since the angular value will change because of the applied angular velocity. However, only comparing two subsequent elements to each other is not robust, as we know from experiments that the odometry data can sometimes have very high peaks. Therefore, the angular data is stored in an array: each iteration the current odometry angle is added to a new array, which is then compared to the old array to obtain the angular change at each of the iterations saved in the array. By then setting conditions such as the threshold value above which we consider the robot to be turning, the number of elements we require to be above this value, and the number of elements in the array, the startpoint and endpoint of a turn can be detected.<br />
<br />
In another version of this code, an exponential weighted moving average was used, which requires only one variable instead of an array. The average can very easily be updated: <math>\dot{A}_{new}=\frac{(1-w)\dot{A}_{old}+w\dot{A}_{current}}{\Delta t}</math> with <math>w<1</math> a weighting variable. A high <math>w</math> discards old values faster, whereas a low <math>w</math> gives a smoother average. If the average exceeds a certain threshold, it is detected as a turn.<br />
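A sketch of this detector follows; the weight w and the threshold are made-up tuning values, and the Δt scaling from the formula is assumed to be folded into the angular-change input.

```cpp
#include <cassert>
#include <cmath>

// Exponentially weighted moving average of the odometry angle change,
// used to flag the start and end of a turn (sketch; w and threshold
// are illustrative tuning values, not the robot's).
class TurnDetector {
public:
    TurnDetector(double w, double threshold) : w_(w), threshold_(threshold) {}

    // Feed the angular change of this iteration; returns true while
    // the smoothed rate exceeds the threshold (robot is turning).
    bool update(double deltaAngle) {
        avg_ = (1.0 - w_) * avg_ + w_ * deltaAngle;
        return std::fabs(avg_) > threshold_;
    }

private:
    double w_, threshold_;
    double avg_ = 0.0;  // only one state variable, unlike the array version
};
```

A single odometry spike gets damped: one large outlier followed by zeros quickly drops below the threshold again, which is the robustness argument made above.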
<br />
====Tuning====<br />
In order to get very robust and simple code, certain assumptions were made. For instance, the turns which were detected by the odometry data were translated to turns of a multiple of pi. Furthermore, the detected values of both the odometry and laser data were not used if they exceeded 100 [m]. <br><br />
The code itself did appear to work quite well: the angle was correct around 19 out of 20 times, and even then it seemed to correct itself after some time. The x and y values, however, still needed tuning. <br><br />
<br />
====Final implementation====<br />
In the final code, this localisation method was not used. Firstly, the communication and implementation into mapping was not done in time. Secondly, and most importantly, this method queries the odometry data every iteration, and it became apparent that calling the odometry data too often disturbs the robot's ability to process commands such as driving and reading laser data. In other words, it made the code and robot control as a whole less robust and reliable. Therefore this code was not used.<br />
<br />
=== Alternative 2: Simplest localisation, using no Kalman filter ===<br />
<br />
In order to finish a simpler version of the localisation in time, the use of a Kalman filter was dropped. In total three big changes were made to the algorithm:<br />
* The Kalman filter was dropped in favor of measuring distances only when arriving at intersections<br />
* The precise angle calculations were dropped in favor of an angle with four possible values for the direction (-Pi/Pi, -Pi/2, 0 and Pi/2)<br />
* The LRF calculations were simplified<br />
<br />
The Kalman update function contained several bugs which we were unable to find in time. Hence the Kalman filter was dropped and measurements are only done when approaching an intersection. This is possible since between two possible turns Pico can only drive along one axis. <br />
Because the precise angle is only needed for the original LRF calculation, it can be dropped for this simplified version. Instead, the angle only keeps track of the direction of the robot, in order to know along which axis Pico is travelling. This angle is updated based on the decision part of the code, by adding or subtracting Pi/2 for direction values of left and right respectively.<br />
Since the calculations with the LRF proved most complicated, these were simplified. Instead of searching in the presumed direction of the x- and y-axes for the local minima, only the shortest distance to the left wall is calculated. This is the distance to either the x- or y-axis, depending on the orientation of the robot. The other axis is measured by simply picking the sensor that corresponds with the beam number which has a rotation of Pi/2. <br />
<br />
This method was abandoned because of time constraints, when we realised that our way of tracking angle changes did not support non-intersection corners, since no decision is made while taking such a corner.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19988Embedded Motion Control 2015 Group 32015-06-23T17:00:56Z<p>S119349: </p>
<hr />
<div>This is the Wiki-page for EMC-group 3, part of the [[Embedded_Motion_Control_2015|Embedded Motion Control 2015 course]].<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make seperate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply it in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight in the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). To make this operational, and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move through the maze and reach the exit.<br />
* Avoid bumping into the walls; to do so, the robot must perceive its surroundings.<br />
* Solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
<br />
[[File:behaviour_diagram.png|thumb|left|Blockdiagram for connection between the contexts]] The robot has a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level operations the robot should be able to perform. These are:<br />
*Determine situation<br />
*Decision making<br />
*Skill selection<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
*Drive<br />
*Rotate<br />
*Scan environment<br />
*Handle intersections<br />
*Handle dead ends<br />
*Discover doors<br />
*Mapping environment<br />
*Make decisions based on the map<br />
*Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|thumb|left|Static LRF]]To solve the problem, it is divided into different blocks, each with its own function. We have chosen to make these five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure below shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. Now, a short clarification of the figure will be given. More detailed information on each block will be discussed in the next sections. <br />
<br />
Let's start with the Scan block:<br />
* Scan receives information about the environment. To do this it uses its laser range finder data.<br />
* Based on this data Scan consults its potential field algorithm to make a vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at what interesting point the robot is. For instance, this can be a junction or a dead end.<br />
* Then it should know if the robot has been there before. Based on that, Decision can send a new action to Scan/Drive. <br />
* Now the new vector is based on the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.<br />
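The cycle above can be sketched as a fixed update order; the class and method names below are illustrative, not taken from our code:<br />

```cpp
#include <string>
#include <vector>

// Hypothetical skeleton of one update cycle through the five blocks.
// The real blocks exchange data (vectors, grids, nodes); here each stub
// only records that it ran, to show the order of the loop.
struct Cycle {
    std::vector<std::string> log;
    void scan()     { log.push_back("scan"); }      // read LRF, build potential field
    void localise() { log.push_back("localise"); }  // fuse LRF and odometry
    void map()      { log.push_back("map"); }       // update nodes and edges
    void decide()   { log.push_back("decide"); }    // pick a direction at a node
    void drive()    { log.push_back("drive"); }     // send the velocity vector
    void step() { scan(); localise(); map(); decide(); drive(); }
};
```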
<br />
=== Calibration ===<br />
<p>[[File:Originaldata.png|thumb|left|Figure 1 - Calibration: Difference between odometry and LRF]] In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors. The robot was programmed to drive back and forth in front of a wall. At every time instance, it would collect odometry data as well as laser data. The laser data point that was straight in front of the robot was compared to the odometry data, i.e. the driven distance is compared to the measured distance to the wall in front of the robot. 'Figure 1 - Calibration' shows these results. The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped so that the laser data should match the odometry exactly, if the sensors provided perfect data. Two things are now notable from this figure:<br />
*The laser data and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
[[File:StaticLRF.png|thumb|left|alt=Static LRF|Figure 2 - Calibration: Static LRF]]<br />
<br />
The noisy signal that was returned by the laser is presented in 'Figure 2 - Calibration'. Here, a part of the laser data is picked from a robot that was not moving.<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm.<br />
* The laser produces a noisy signal. Do not trust one measurement but take the average over time instead.<br />
* The odometry produces no notable noise at all, but it has a significant drift as the driven distance increases. Usage is recommended only for smaller distances (<1 m)</p><br />
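Averaging over time, as recommended above, can be done on a window of samples; a minimal sketch (not taken from our code):<br />

```cpp
#include <cmath>
#include <numeric>
#include <vector>

// Mean of a window of range samples; averaging over time suppresses the
// ~5.5 mm standard deviation of the raw LRF noise.
double mean(const std::vector<double>& r) {
    return std::accumulate(r.begin(), r.end(), 0.0) / r.size();
}

// Sample standard deviation of the same window, as used to quantify the
// noise in 'Figure 2 - Calibration'.
double stddev(const std::vector<double>& r) {
    double m = mean(r), ss = 0.0;
    for (double v : r) ss += (v - m) * (v - m);
    return std::sqrt(ss / (r.size() - 1));
}
```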
<br><br><br><br><br><br><br><br><br />
<br />
= Software implementation =<br />
In this section, the implementation of the software is discussed based on the block division we made.<br />
<br />
A brief description of each block can be found here. In addition, more detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
[[File:Drive.jpg|thumb|left|CP of Drive]] Basically, the [[Embedded_Motion_Control_2015_Group_3/Drive|Drive block]] is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in [[Embedded_Motion_Control_2015_Group_3/Scan|Scan]]. A potential field is an easy way to drive through corridors and make turns. It is important to note that information from the decision maker can influence the tasks Drive has to do. <br />
<br />
Two other methods were also considered: [[Embedded_Motion_Control_2015_Group_3/Archive#Simple_method|Simple method]] and [[Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning|Path planning]]. We spent considerable time on the path planning method in particular. However, after testing, the potential field was the most robust and most convenient method.<br />
<br><br><br><br><br><br />
<br />
=== Scan block ===<br />
[[File:Scan_cp_new.jpg|thumb|left|Composition pattern Scan]][[Embedded_Motion_Control_2015_Group_3/Scan|The block Scan]] processes the laser data of the laser range finder. This data is used to detect corridors, doors, and different kinds of junctions. The data that is retrieved by 'scan' is used by all three other blocks. <br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br><br><br><br><br />
<br />
PICO always moves to the place with the most space using its potential field. However, at junctions and intersections the current potential field is incapable of leading PICO in the desired direction. Virtual walls are constructed to shield potential pathways, so that PICO will move in the direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. Also, the scan block is capable of detecting doors, which is a necessary part of solving the maze. More detailed information about these properties:<br />
<br />
* Potential field<br />
* Detecting junctions/intersections<br />
* Virtual walls<br />
* Collision avoidance<br />
* Detecting doors<br />
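A minimal sketch of the repulsive part of such a potential field (the inverse-square weighting and the function name are illustrative assumptions, not our exact implementation):<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

// Hypothetical potential-field step: each laser hit at distance r and
// bearing a pushes the robot away with magnitude ~ 1/r^2, so the summed
// vector points towards the direction with the most free space.
Vec2 repulsionVector(const std::vector<double>& ranges,
                     double angle_min, double angle_increment) {
    Vec2 v{0.0, 0.0};
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double a = angle_min + i * angle_increment;
        double w = 1.0 / (ranges[i] * ranges[i]);  // stronger push from nearby walls
        v.x -= w * std::cos(a);                    // push away from the obstacle
        v.y -= w * std::sin(a);
    }
    return v;
}
```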
<br />
=== Decision block ===<br />
[[File:Composition_Pattern_Decision.png|thumb|left|Composition pattern Decision]]The [[Embedded_Motion_Control_2015_Group_3/Decision|Decision block]] contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
<ins>Input:</ins><br />
* Mapping model<br />
* Scan data<br />
<br />
<ins>Output:</ins><br />
* Specific drive action command<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks; if two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached.<br />
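The selection rule can be sketched as follows (names are illustrative; the function assumes at least one candidate direction):<br />

```cpp
#include <cstdlib>
#include <cstddef>
#include <vector>

// Sketch of the Trémaux selection rule: given the number of marks on
// each candidate corridor, pick the least-marked one; ties are broken
// randomly, as described above.
int pickDirection(const std::vector<int>& marks) {
    int best = marks[0];
    for (int m : marks) if (m < best) best = m;   // fewest marks so far
    std::vector<int> candidates;
    for (std::size_t i = 0; i < marks.size(); ++i)
        if (marks[i] == best) candidates.push_back(static_cast<int>(i));
    return candidates[std::rand() % candidates.size()];  // random tie-break
}
```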
<br />
=== Mapping block ===<br />
[[File:Emc03 wayfindingCP1.png|thumb|left|Map&solve algorithm]] [[Embedded_Motion_Control_2015_Group_3/Mapping|The mapping block]] will store the corridors and junctions of the maze. This way, the decision maker can make informed decisions at intersections, to ensure that the maze will be solved in a strategic way.<br />
<br />
To do this, the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used, which is an implementation of Depth First Search.<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end, or any place in the maze where the robot can go in more than one direction (i.e., an intersection). An edge is the connection between one node and another, and as such is also called a corridor. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is a direction that the Mapping block considers the best option.<br />
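The node/edge representation described above might be captured in a structure like this (an illustrative sketch, not our actual data layout):<br />

```cpp
#include <vector>

// Illustrative maze graph: a node is a junction or dead end, an edge a
// corridor between two nodes; from == to encodes a loop.
struct Edge {
    int from, to;   // node indices
    int marks;      // Trémaux marks drawn on this corridor
};

struct Maze {
    int nodeCount = 0;
    std::vector<Edge> edges;

    int addNode() { return nodeCount++; }
    void addEdge(int a, int b) { edges.push_back({a, b, 0}); }
    bool isLoop(const Edge& e) const { return e.from == e.to; }
};
```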
<br />
In order to detect loops, the position of the robot must be known as well as the position of each node, to see when the robot has returned to the same location. This is decoupled from the Mapping block and done in the [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation block]]. Since the localization did not work in the end, a Random Walk implementation was also made in the Mapping block.<br />
<br />
=== Localisation block ===<br />
The purpose of the localisation is that the robot can prevent itself from driving in a loop indefinitely, by knowing where it is at a given moment in time. Knowing where it is, it can decide what to do next based on this information. As a result, the robot will be able to exit the maze within finite time, or it will report that there is no exit if it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation page]], by separating and discussing the important factors.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge, in which the robot has to drive through a corridor and then take the first exit (whether left or right). This job can be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implied some extra work to think about properly structured code, because only then can the same script be used for the final challenge. After the corridor competition, we could question our choice, because we failed the corridor challenge while other groups succeeded. However, most of those groups had selected approach 1, whereas we already had a decent base for the a-maze-ing challenge. This proved its worth later on.<br />
<br />
For the a-maze-ing challenge we decided on using two versions of our software package. In the first run (see the section Videos further down on the page), we implemented Trémaux's algorithm together with a localiser, which would together map the maze and try to solve it. Our second run was conducted with Trémaux's algorithm and the localisation algorithm turned off. Each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
=== Run 1 ===<br />
The first run is taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way cross-section and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused because of the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because it (of course) did not open, the robot decides to turn right and explore the dead end. In the part between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight but also to evade the walls to the left and to the right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits for the maximum of 5 seconds that it can take to open the door. Now it recognizes that this too is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Trémaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the other nodes. The robot decides to follow the same route as the first time, fails to drive into the corridor with the door in it, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction that the robot encountered made PICO go into its collision-avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be implemented as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
=== Run 2 ===<br />
For the second run, we ran a version of our software without Trémaux's algorithm implemented and with the global localiser absent. These features were developed later in the project and were not fully finished. For this run, a random decision was passed to the decision maker every time it asked for a new direction to head to.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same as in the first run. This is because the robot is driven by separate blocks of software, and the blocks that are active while following a corridor were identical for both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. Now a random decision is made to go left, followed by a right turn into the corridor with the door. The robot recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after ringing the bell, the robot is programmed to wait for five seconds until it is allowed to move again. During these five seconds, it used the LRF to check whether the door had moved out of its way. After the passage was all clear, the robot started exploring the new area and drives into the open space. Note that, between 0:30 and 0:36, the robot made a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of this open space. Between 0:36 and 0:46 it drives in 'open space mode'. This means that the robot will drive to the nearest wall and start driving alongside it; it should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47 the normal 'corridor mode' starts working again: the potential field method directs the robot towards the middle of the corridor, which explains the sharp turn it made at 0:47. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the random decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. At 1:10, the well-earned applause for PICO started as it finished the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [[Embedded_Motion_Control_2015_Group_3/Experiments|Here]] you can find short information about the dates and goals of the experiments, as well as a short evaluation of each.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [[Embedded_Motion_Control_2015_Group_3/Drive|Drive]] <br />
* [[Embedded_Motion_Control_2015_Group_3/Scan|Scan]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Decision|Decision]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Mapping|Mapping]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Experiments|Experiments]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Archive|Archive]] No longer used.<br />
* [[Embedded_Motion_Control_2015_Group_3/Integration|Integration]] <-- needed?<br />
* [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation]]</div>
<hr />
<div>= Localisation = <br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
This page goes into details about localisation.<br />
<br />
== Requirements of Localisation ==<br />
<br />
In order to be able to locate itself within its environment, the robot needs information. The required data to obtain global position are:<br />
# global x-position [m]<br />
# global y-position [m]<br />
# global a-angle [rad.]<br />
<br />
The error in the position data must be quantified and minimized, so that the location does not drift in the long run. For example, if the robot's x- and y-coordinates differ due to an error, the robot will think it is at a different location, whereas it is actually standing still in exactly the same location and orientation.<br />
<br />
The sensor data required to obtain the above mentioned position data are the following:<br />
# odometry: global x [m] , global y [m] , global a [rad.]<br />
# LRF: all laser ranges [m]<br />
# velocity input to robot<br />
<br />
== Method of localisation ==<br />
The robot will need global coordinates. There are two sensors which it can use to determine these coordinates. However, both sensors have their own drawbacks. <br />
*The odometry sensor provides global x-y coordinates and angle. There is not much variance in the data of this sensor, but there is a drift (bias) that accumulates over time. The odometry data can be viewed as the feedforward information of the system.<br />
*The LRF sensor provides 1000 ranges [m] with distances to objects over a scope of 270 degrees around the robot. This sensor shows no bias, but it does have variance. Furthermore, the raw LRF data does not provide the global coordinates that we want; therefore this data has to be converted into position changes in terms of the global coordinates. The LRF data can be seen as the feedback loop of the system.<br />
<br />
When the robot initially starts its program, the global coordinates are all zero, so the start position of the robot determines the direction of the x- and y-axes. The data from the odometry and the LRF are updated at each time instant. The odometry works very intuitively: it tells you how far you have moved based on wheel rotation. In the case of the LRF, however, the following happens: it measures the distances to the objects in the environment at time t0, and measures again at t1. The difference in distance, converted to the desired coordinates, should be equal to the odometry data. Of course this will not happen exactly, due to the errors in the sensors, which is why a filter is used between each update step.<br />
<br />
<br />
A '''Kalman filter''' is used to filter the data obtained from odometry as well as from LRF, in order to maximize the accuracy.<br />
<br />
=== Kalman filter ===<br />
The Kalman filter uses an update cycle with two steps. In the first step the new position is estimated based on the previous position and the input; an estimate of its error is then made, which is used in the second step. In the second step, data from measurements is used to correct the estimated position. Since the definition of the directions of the x- and y-axes is arbitrary, they are aligned with the corridor in which PICO starts. The algorithm that is used is shown in the figure below:<br />
<br />
[[File:scheme_Kalman.gif|900px|center|thumb|Scheme Kalman]]<br />
The various variables used in the figure above, are explained here:<br />
<br />
<math>\hat{x}_k^- </math> is the predicted ahead state variable at discrete time instance <math>k</math>. This column vector consists of the global x-position, global y-position and the global angle. So logically, <math>\hat{x}_{k-1}^- </math> is the same vector at a previous time instance.<br />
<br />
<math>A</math> is an n by n matrix that relates the state at the previous time step <math> k-1 </math> to the state at the current step <math> k </math>, in the absence of either a driving function or process noise.<br />
<br />
<math>B</math> is an n by n matrix that relates the optional control input u to the state variable.<br />
<br />
<math>u_{k-1} </math> is the control input at the previous state of time. So this corresponds to a 3 by 1 column vector containing the velocities that were sent to the wheel base:<br />
# vx: translational velocity in x-direction [m/s]<br />
# vy: translational velocity in y-direction [m/s]<br />
# va: angular velocity around z axis [rad./s]<br />
<br />
<math> P_k^-</math> is a n by n matrix containing the error covariance predicted ahead at time instance <math>k</math>.<br />
<br />
<math> Q</math> is a n by n matrix containing covariance of the process noise.<br />
<br />
<math> K_k</math> is an n by m matrix that represents the Kalman gain.<br />
<br />
<br />
<math> H</math> is an m by n matrix that relates the state to the measurement <math>z_k</math>.<br />
<br />
<math> R</math> is an m by m matrix that contains the measurement noise covariance.<br />
<br />
<math> z_k</math> is the measured data in a column vector (to be compared to predictions).<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
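To make the cycle concrete, here is a scalar analogue of the algorithm in the figure (A = 1, B = dt, H = 1; the real filter uses the 3-state, 6-measurement matrices defined above, but the structure is identical):<br />

```cpp
// Scalar analogue of the two-step Kalman cycle; all names illustrative.
struct ScalarKalman {
    double x = 0.0;  // state estimate (e.g. global x-position)
    double P = 1.0;  // error covariance
    double Q, R;     // process and measurement noise covariances

    ScalarKalman(double q, double r) : Q(q), R(r) {}

    // Step 1: project the state ahead with control input u over time dt.
    void predict(double u, double dt) {
        x += u * dt;   // x_k^- = A x_{k-1} + B u_{k-1}
        P += Q;        // P_k^- = A P A^T + Q
    }

    // Step 2: correct the prediction with a measurement z.
    void update(double z) {
        double K = P / (P + R);  // Kalman gain
        x += K * (z - x);        // blend prediction and measurement
        P *= (1.0 - K);          // P_k = (I - K H) P_k^-
    }
};
```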
During the second step of the Kalman update both the LRF and the odometry are used. For both sensors the difference between the current and last value is used to determine the position change since the last update. This value is then added to the previous position from the Kalman update. The odometry data can be used directly. For the LRF, however, the x, y and a values first have to be calculated from the raw LRF data. This is done by measuring the distance to the end of the corridors. Since PICO can see 270 degrees around itself, it can always measure the distance to one end of the corridor it is in, as well as the distance to one of the side walls of the corridor. <br />
<br />
[[File:LRF_KALMAN.png|200px|center|thumb|LRF Kalman]]<br />
<br />
The estimated angle is used to calculate which sensor should point towards the end of the corridor. An interval around the corresponding LRF beam is searched for a local minimum, which should belong to the beam that hits the end perpendicularly. This beam points directly at the end of the corridor and is then used to calculate the LRF value for the angle of PICO. The difference between the previous and current distance to the end wall is calculated, which is the position change for either x or y used in the Kalman update. The other position change is calculated similarly, but instead of the end of the corridor the distance to the side wall is used.<br />
Since it is possible to lose sight of a wall, for instance when driving on an intersection, a safeguard is put in place. If the position change based on the LRF is too big, it is assumed that the LRF data is unreliable for that update cycle and only the odometry data is used. This is done by switching between two R matrices, one of which sets the contributions of the laser data to zero. In the regular R matrix the contribution of the LRF data is weighed more heavily, under the assumption that the LRF is more reliable overall.<br />
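The safeguard amounts to a simple threshold check (the function name and threshold parameter are illustrative, not taken from our code):<br />

```cpp
#include <cmath>

// If the LRF-based position change since the last update exceeds a
// threshold (e.g. when a wall disappears at an intersection), the LRF
// data is deemed unreliable for this cycle.
bool lrfReliable(double dx_lrf, double dy_lrf, double max_jump) {
    return std::fabs(dx_lrf) < max_jump && std::fabs(dy_lrf) < max_jump;
}
```

The caller would then select the regular R matrix when this returns true, and the odometry-only R matrix otherwise.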
<br />
== Implementation of method ==<br />
Here the implementation of the method will be briefly discussed.<br />
===Interface===<br />
Localisation is a separate cpp-file which can be seen as a block. It uses the Scan class to read sensor data, and the Drive class to retrieve information on the velocity. The variables that are used as arguments are copies and are not passed by reference, as they are only needed to calculate the position of the robot. The variables that are only used within the localisation class are of course set to private. After all necessary blocks are defined in the header file localisation.h, accompanied by the function prototypes, the file localisation.cpp can be properly used. The file consists of 6 functions in total:<br />
* Initializing initial position<br />
* Retrieving LRF-data and converting this into useful X,Y,A data, stored to be used in another function<br />
* Retrieving ODO-data and storing this to be used in another function<br />
* Composing the Z column-vector used in the Kalman algorithm<br />
* Composing the B matrix that is used in the Kalman algorithm<br />
* Function containing the actual Kalman algorithm, which uses the information retreived in the above function to give an accurate localisation<br />
The resulting X,Y,A coordinates are made available to the mapping block for further use. There this info can be used to pinpoint and save nodes, as well as to recognize whether the robot has visited these coordinates before, and much more. See the mapping section for further details on this block.<br />
<br />
=== Function 1: initialize position: SetX0()===<br />
In this function the position is determined by means of a for-loop that loops over all of the laser ranges. It determines the minimum range by means of an if-statement. If the minimum range is found, it is stored along with its beam number. This direction is set to correspond to the global Y-direction, and the global angle A is calculated as well, using the beam number. Now the global X-axis is set by using the global Angle. These values are all stored and X,Y,A are initialized.<br />
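The loop described above amounts to a minimum search over the ranges (a sketch, not the literal code):<br />

```cpp
#include <cstddef>
#include <vector>

// Sketch of the initialisation scan: loop over all laser ranges and
// remember the minimum range and its beam index; that beam defines the
// global Y-direction, from which the global angle A follows.
std::size_t minRangeBeam(const std::vector<double>& ranges) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < ranges.size(); ++i)
        if (ranges[i] < ranges[best]) best = i;  // closest wall so far
    return best;
}
```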
<br />
=== Function 2: LRF-retrieval and conversion: LrfLocalisation()===<br />
This function uses the range data to estimate the position of the robot. <br />
First the angular position A is evaluated: if this angle is larger than pi or smaller than minus pi, it is wrapped by subtracting or adding 2 pi respectively.<br />
This wrapped angle is used to determine the beam numbers of the current minimal ranges, using the following relation:<br />
<br />
 n_x = (X[2] + robot.angle_max) / robot.angle_increment;<br />
In this equation n_x corresponds to the beam number of the global X-axis and position, X[2] is the global angle A, and the other parameters are self-explanatory. The range belonging to this beam number is stored. The same procedure is used to determine the Y-data.<br />
<br />
In order to create robustness and to prevent problems in other function calls, some if-statements are included to switch axes if necessary. If the beam number n_x is smaller than 4 or bigger than 994, it is one of the outer ranges. If one of these if-statements is true, the beam number n_x will be changed: we switch from the positive to the negative axis, or the other way around, by adding pi to the angle. This way, the axis cannot get out of sight of the robot. Simple boolean expressions are used to notify the system whether the positive or negative axis is being tracked. This is important because it determines whether distances should be counted as negative or positive. This axis-switching system is incorporated because no assumption is made regarding the initial position: the robot should be able to determine an axis to track, regardless of its initial angle in the maze.<br />
<br />
Next the values of the ranges between n_x-5 and n_x+5 are checked using a for-loop. If a minimum range is found, it is stored. However, if the minimum value lies on the outer bounds of the loop, or if the difference is too small (noise), then no minimum is found. This is done for both the x- and y-directions.<br />
<br />
Finally, using the laser distance in the x- and y-directions, the difference in range with the previous function call is calculated. These differences represent the movements made between function calls, and therefore the change in position. Every function call, the differences in each direction are added to the total of the differences calculated in previous calls. These variables therefore represent the global X, Y and A position.<br />
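The wrapping and beam-number steps can be sketched as follows (parameter names follow the relation shown above; the wrap interval is an assumption):<br />

```cpp
#include <cmath>

// Wrap an angle to (-pi, pi], as described for the global angle A.
double wrapAngle(double a) {
    while (a > M_PI)   a -= 2.0 * M_PI;
    while (a <= -M_PI) a += 2.0 * M_PI;
    return a;
}

// Beam index whose bearing matches the global X-axis: the wrapped angle
// offset by angle_max, divided by the angular step between beams.
int beamForAngle(double A, double angle_max, double angle_increment) {
    return static_cast<int>((A + angle_max) / angle_increment);
}
```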
<br />
=== Function 3: ODO-retrieval: ODOLocalisation()===<br />
This function creates an odometry object and reads the odometry data from the io-layer. It starts with an initial X=0, Y=0, A=A_init. It calculates the differences each time the function is called: if the x-position reads 5 meters and the next function call reads 5.2 meters, then the difference equals the recent reading minus the old reading. The differences are stored, as well as the old position and the sum of the differences (the actual position). This info is used by the other functions in the file that need it.<br />
<br />
[[File:function3wiki.png|1000px|center|thumb|Odometry localisation in code for Kalman algorithm]]<br />
<br />
=== Function 4: Making Z: MakeZ()===<br />
<br />
Z is a 6-entry column vector. Every time it is called, it will update the entries as follows: <br />
The first three entries (corresponding to X_lrf, Y_lrf and A_lrf respectively) equal their global position plus the difference in position calculated in the LRF retrieval function.<br />
The latter three entries correspond in the same manner to their odometry counterparts. <br />
These second X_odo,Y_odo and A_odo are updated in the same manner, but using the difference in odometry instead of LRF. <br />
The odometry differences are calculated in function 3. <br />
Now the Z-column vector is ready to be used by the Kalman filter.<br />
<br />
<br />
[[File:function4wiki.png|1000px|center|thumb|Construction of Z column-vector in code for Kalman algorithm]]<br />
<br />
=== Function 5: Making B: MakeB()===<br />
<br />
This function simply constructs the 3x3 matrix B that is used in the Kalman algorithm. The first two rows concern the X- and Y-position. The first row consists of cos(X[2]), sin(X[2]) and zero respectively.<br />
The mirror holds for the Y-position, so the second row consists of sin(X[2]), cos(X[2]) and zero respectively.<br />
The third row contains zero entries in the first two columns, and a value of 1 for the angular entry.<br />
<br />
[[File:function5wiki.png|1000px|center|thumb|Construction of B matrix in code for Kalman algorithm]]<br />
<br />
=== Function 6: Kalman filter: UpdateKalman()===<br />
<br />
A short explanation of this function:<br />
First it is checked whether X0 is set. If it is, Lrflocalisation (function 2) is called, as well as ODOlocalisation (function 3).<br />
Now we have two sets of X, Y and A positions, each with its own accuracy (odometry drift, LRF variance).<br />
Functions 4 and 5 are called next, in order to construct Z and B.<br />
If no minimum ranges were found, the R matrix is switched from R1 to R2 (only odometry data is used). If, however, a minimum has been found, then R1 is used (both odometry and LRF information).<br />
Next the U vector is constructed, consisting of the velocity components in the X, Y and A directions. <br />
The ahead projection of the position X is done using this data. After that, the projected error covariance matrix P is calculated. The Kalman gain is computed, and finally the estimates are updated with the measurements, yielding the filtered position. <br />
<br />
If X0 was not set, the function SetX0() is called again, until X0 is set.<br />
The figure below shows the structure of the algorithm far more clearly.<br />
<br />
<br />
[[File:function6wiki.png|1400px|center|thumb|Kalman Algorithm used as filter]]<br />
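The predict/update cycle described above corresponds to the standard discrete-time Kalman filter equations (standard textbook notation; the project's exact symbol names may differ, and R is switched between R1 and R2 as described):

```latex
% Prediction (a priori) step, using the control input u_k (velocities):
\hat{x}_k^- = A\,\hat{x}_{k-1} + B\,u_k \qquad
P_k^- = A\,P_{k-1}\,A^T + Q
% Update (a posteriori) step, using the measurement z_k:
K_k = P_k^-\,H^T\left(H\,P_k^-\,H^T + R\right)^{-1}
\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\,\hat{x}_k^-\right) \qquad
P_k = \left(I - K_k H\right)P_k^-
```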
<br />
== Technicalities and solutions==<br />
<br />
<br />
=== Problems with Localisation block===<br />
<br />
The ambitious idea to use no assumptions, and to make the robot able to localise properly in any arbitrary environment, proved to take more time than predicted.<br />
The problem that occurred is that the angle A would sometimes take on strange values. Through this error the whole program produces gibberish, as the angle is widely used throughout the functions. Various strategies were tried to make the program work properly; however, every time it seemed to work in a certain maze, it would cause problems in a different maze. <br />
As the localisation is of vital importance for the mapping to work, we had to think of a back-up plan in case this would not work out. To this end, we devised several simplified versions of the localisation. These versions use more assumptions, and their accuracy is lower than before. However, it seemed better to have somewhat less accuracy while knowing that the procedure would work in any maze, than a very accurate but unstable procedure.<br />
<br />
=== Alternative 1: Simpler localisation, using axis-alignment and phase snaps ===<br />
For this localisation method, both laser data and odometry data are used, without using a Kalman filter.<br />
====Localisation method====<br />
[[File:Dirty_loca.jpg|thumb|left|Schematic view of the localisation method]] As shown in the diagram, at the initial position 1, the robot stores the distance to the furthest wall in front of it, found by taking the element of the laser data with the largest range and the corresponding angle of that element. Then, at position 2, where the robot starts to turn, it runs a startpoint() function, which again calculates its distance to the wall. Comparing the distance change between positions 1 and 2 yields the travelled distance A, which is used to update the global coordinates. In addition, the odometry data at this point is saved.<br><br />
When the robot stops turning at position 3, the function endpoint() is called. Because it is not possible to use laser data during the turn to update the global coordinates, the change in odometry data between these two positions is used for the coordinate update instead. After the turn has finished, at position 3, the robot again stores its distance to the furthest wall, shown as distance B, which is used to obtain the next travelled distance. <br />
<br />
====Turn detection====<br />
Now, the difficulty lies in recognising points 2 and 3, or in other words the moments at which to call the startpoint() and endpoint() functions. For this, the change of the angular data in the odometry is used, since the angular value changes because of the applied angular velocity. However, only comparing two subsequent elements is not robust, as we know from experiments that the odometry data can sometimes contain very high peaks. Therefore, the angular data is stored in an array: each iteration the current odometry angle is added, and the new array is compared to the old one to obtain the angular change at each of the stored iterations. By then setting conditions such as the threshold value above which we consider the robot to be turning, the number of elements required to be above this value, and the number of elements in the array, the startpoint and endpoint of a turn can be detected. <br />
<br />
====Tuning====<br />
In order to get very robust and simple code, certain assumptions were made. For instance, the turns detected from the odometry data were translated to turns of a multiple of pi. Furthermore, the detected values of both the odometry and laser data were not used if they exceeded 100 [m]. <br><br />
The code itself appeared to work quite well: the angle was correct around 19 out of 20 times, and even then it seemed to correct itself after some time. The x and y values, however, still needed tuning. <br><br />
<br />
====Final implementation====<br />
In the final code, this localisation method was not used. Firstly, because the communication and implementation into mapping was not done in time. Secondly, and most importantly, this method queries the odometry data every iteration, and it became apparent that calling the odometry data too often disturbs the robot's ability to process commands such as driving and reading laser data. In other words, it made the code and robot control as a whole less robust and reliable. Therefore this code was not used.<br />
<br />
=== Alternative 2: Simplest localisation, using no Kalman filter ===<br />
<br />
In order to finish a simpler version of the localisation in time, the use of a Kalman filter was dropped. In total, three big changes were made to the algorithm:<br />
- The Kalman filter was dropped in favor of measuring distances only when arriving at intersections<br />
- The precise angle calculations were dropped in favor of an angle with four possible values for the direction (-Pi/Pi, -Pi/2, 0 and Pi/2)<br />
- The LRF calculations were simplified<br />
<br />
The Kalman update function contained several bugs which we were unable to find in time. Hence the Kalman filter was dropped and measurements are only done when approaching an intersection. This is possible since between two possible turns Pico can only drive along one axis. <br />
Because the precise angle is only needed for the original LRF calculation, it can be dropped in this simplified version. Instead, the angle only keeps track of the direction of the robot, in order to know along which axis Pico is travelling. This angle is updated by the decision part of the code, which adds or subtracts Pi/2 for direction values of left and right respectively.<br />
Since the calculations with the LRF proved most complicated, these were simplified. Instead of searching in the presumed direction of the x- and y-axes for the local minima, only the shortest distance to the left wall is calculated. This is the distance to either the x- or y-axis, depending on the orientation of the robot. The other axis is measured by simply picking the sensor whose beam number corresponds to a rotation of Pi/2. <br />
<br />
This method was abandoned because of time constraints, when we realised that our way of tracking angle changes did not support non-intersection corners, since no decision is made while taking such a corner.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive&diff=19986Embedded Motion Control 2015 Group 3/Archive2015-06-23T16:59:31Z<p>S119349: </p>
<hr />
<div>This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
This page is intentionally left blank.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments&diff=19985Embedded Motion Control 2015 Group 3/Experiments2015-06-23T16:59:10Z<p>S119349: </p>
<hr />
<div>= Experiments = <br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
== Experiment 1 == <br />
Date: Friday May 8 <br />
<br />
Purpose: <br />
* Working with PICO<br />
* Some calibration to check odometry and LRF data<br />
<br />
Evaluation:<br />
* There were problems between the laptop we used and PICO.<br />
<br />
== Experiment 2 == <br />
Date: Tuesday May 12<br />
<br />
Purpose:<br />
* Calibration<br />
* Do corridor challenge<br />
<br />
Evaluation:<br />
* [[Embedded_Motion_Control_2015_Group_3#Calibration|Here]] the results of the calibration can be found.<br />
* Driving through a straight corridor went very well, but we did not succeed in the corridor challenge yet.<br />
<br />
== Experiment 3 == <br />
Date: Tuesday May 22<br />
<br />
Purpose:<br />
* Calibration<br />
* Do corridor challenge<br />
<br />
Evaluation:<br />
* Combining the Scan and Drive blocks for path planning was not successful.<br />
* Potential field script was not ready yet. <br />
<br />
== Experiment 4 == <br />
Date: Tuesday May 29<br />
<br />
Purpose:<br />
* Test potential field<br />
* Test path planning<br />
<br />
Evaluation:<br />
* Path planning was not very successful.<br />
* Potential field did very well in corridors (see [[Embedded_Motion_Control_2015_Group_3#Videos|video]]). Intersections need some extra attention.<br />
<br />
== Experiment 5 ==<br />
Date: Friday June 05 <br />
<br />
Purpose:<br />
* Test potential field & decision integration<br />
<br />
Evaluation:<br />
* Integration is almost done. There are still some problems with crossing intersections and taking corners into (short) dead ends like doors. <br />
* The collision detection behaves somewhat unexpectedly at times.<br />
<br />
== Experiment 6 ==<br />
Date: Friday June 12<br />
<br />
Purpose: Test everything combined. Especially, some difficult situation with doors, open spaces, dead ends and loops.<br />
<br />
Evaluation: <br />
* Localisation was not finished yet. <br />
* There were some problems with the angular corrections. Therefore, PICO sometimes drove sideways through the corridor.<br />
* We have to make adjustments before the next experiment on Tuesday June 16.<br />
<br />
== Experiment 7 ==<br />
Date: Tuesday June 16<br />
<br />
Purpose: <br />
*The last repetition for the A-maze-ing challenge<br />
<br />
Evaluation: <br />
* Sometimes PICO made strange choices at intersections, probably because the localisation was not working well. <br />
* Therefore, before tomorrow the localisation has to be adjusted. And we will make a back-up plan that is random and uses no localisation at all.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping&diff=19984Embedded Motion Control 2015 Group 3/Mapping2015-06-23T16:58:19Z<p>S119349: </p>
<hr />
<div>This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth First Search), which proves to be an efficient way of solving a maze with minimum backtracking.<br />
<br />
Because the Localization block did not work in the end, and as such loops could not be resolved, a Random Walk strategy was used, so that at least the maze would (probably) be solved, and the capabilities of the other blocks could be shown.<br />
<br />
= World model structure =<br />
[[File:Map.png|400px|right|thumb|Example of a world model. Note that all unknown edges or impossible-to-reach edges are connected to noNode to make sure that all edges exist to ensure algorithmic integrity]]<br />
The maze will consist of nodes and edges; i.e., an undirected graph. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is the direction that is advised, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive in any of the principal directions, and that the junctions can only be of specific formats, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠, four-way intersection ╬ and dead end ╥.<br />
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [[Embedded_Motion_Control_2015_Group_3/Localisation|localisation class]].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anything between one (dead end) and four (cross-intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* Number of times Pico has traversed a corridor. This is important for Tremaux algorithm, which will be explained later.<br />
* Travel time for a corridor. This can be used to give priorities in case multiple options are present. <br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graphing Library) Graph object. This means that most of the overhead of maintaining an undirected graph is done by an existing library. The library used is also extremely extendable by means of Bundled Properties, which facilitate an arbitrary number of properties for nodes and edges. The main disadvantage is the syntactic overhead generated especially because BGL is so extendable, although the overhead is mostly contained into making the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|right|thumb|Map&solve algorithm (update?)]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. Now it can decide whether it is at a new node or at an old node.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything about this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisted old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out which node it came from. Now it can define which edge it has been traversing. It marks the edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot will also have to realize if the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''Visited 2 times'' (not currently implemented due to concerns about the door detection robustness). If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, choose any edge that has been visited only once.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
**Above points can be summarized as 'pick the least visited edge'. If formulated this way, the algorithm will also allow for 'accidents', e.g., an edge visited more than two times, which should definitely be avoided, and will never stop exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
**In case more than one edge is an option according to Tremaux, a cost function will determine the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that will explore as many options as possible in a given time). The cost function could be extended by analyzing the entire maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although currently not implemented, this again shows a huge advantage of using an existing graphing library, since graph traversal is built into the library.<br />
<br />
//Done with the mapping process. Now, let's pick a node to go to!<br />
 int minTimesVisited = INT_MAX; //Largest int, so any real visit count is smaller (HUGE_VAL is a double).<br />
 double minCost = HUGE_VAL;<br />
 int decidedDirection = 0; //GLOBAL COORDINATES<br />
 for(int direction = 0; direction<4; ++direction){<br />
 //check if this direction is actually possible<br />
 if(isPossibleDirection(type,direction)){<br />
 int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
 if(thisCorridorTimesVisited <= minTimesVisited){<br />
 if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;}<br />
 minTimesVisited = thisCorridorTimesVisited;<br />
 double cost=getCost(direction);<br />
 if(cost<minCost){<br />
 decidedDirection = direction;<br />
 minCost = cost; //Remember the cost of the current best choice.<br />
 }<br />
 }<br />
 }<br />
 }<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a global direction pointing from the node to the direction of the edge. The robot must receive a command that will guide it through the maze in local coordinates. Globally, we can assume that there are only four possible directions because of the axis-aligned maze: North, west, south and east. These directions are represented by the integers 0, 1, 2 and 3 respectively in our code. <br />
* The actual command is formulated<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'prevNode', so the next time the algorithm is called, it knows where it came from and start figuring out the next step.<br />
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux' algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paint brush. Continuously, the maze solver will drag the brush behind him, creating a line on the floor. When an intersection is encountered, the solver will see if any corridors do not have a line of paint on the floor. He will take one of those corridors, since he hasn't explored that bit of maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but instead use the world model we created. Pico updates an integer for each corridor indicating the number of times he has visited that corridor. Then, Pico simply prefers the corridor with the least number of visits. In theory, this should always be 0, 1 or 2 times (with 0 times preferred over 1 time, and 2 times generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something - in that case, 2 times is preferred over 3 times, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not prescribe what to do when there are multiple possibilities - it does not matter for solving a maze if time is not an issue. We, however, would like to quickly explore as many nodes as possible, so instead of just picking a random direction, we pick the direction which minimizes the travel time to the next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen whenever possible.<br />
<br />
This can in theory be extended with a more elaborate searches. For example, to try not to go to a node that will lead to only twice-visited edges, which should be perfectly possible by using some built-in algorithms from Boost. However, this is not implemented yet.<br />
<br />
= Random Walk =<br />
To detect and close loops (which were likely to be present) in the maze, the Mapping block needed information from Localization, which was not functional in time. Wall-following is not a viable option if the robot starts in a loop (which was in fact the case in the A-Maze-ing challenge, as it turned out), so an alternative strategy must be employed. The strategy chosen is '''random walk'''. This strategy will solve any (static) maze with probability <math>1</math> as <math>t\to\infty</math>. Of course, we do not have infinite time, but trying to optimize this strategy in any way (e.g., trying to avoid going in one direction all the time) ''could'' mean that it will never solve the maze, since the aforementioned property only holds for a truly random walk. As a bonus, there is also a certain probability that it will optimally solve a maze regardless of its properties, which was in fact the case for our maze challenge run.<br />
<br />
Since the software for the mapping was almost completely decoupled from the rest, the random walk could be implemented easily. It will still get an intersection type from the Decision Maker, and will still return a decision. It was decided not to have U-turns in the random walk (except for dead ends, of course), since it's trivially shown that a U-turn can always be achieved with a combination of other decisions, and U-turns would in general severely impede progress through the maze.<br />
<br />
Throughout the course, it was always a balancing act between decoupling and ease of implementation. In this particular case, ease of implementation was favoured, since the random walk block merely tries to solve the maze and highlight (successfully so) the capabilities of all the other blocks. Furthermore, this block was focused on robustness; as such, even though in the example below <code>case 2:</code> could be used, <code>default</code> was chosen to eliminate the risk of accidental bugs.<br />
<br />
<code><br />
srand (time(NULL)); //Initialize random number generator<br />
int direction =0; //Initialize direction<br />
switch(type){<br />
case FourWay:<br />
direction = rand()%3;<br />
switch(direction){<br />
case 0:<br />
return Left;<br />
break;<br />
case 1:<br />
return Forward;<br />
break;<br />
default:<br />
return Right;<br />
break;<br />
}<br />
break;<br />
case LeftJunction: //etc </code></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision&diff=19983Embedded Motion Control 2015 Group 3/Decision2015-06-23T16:57:57Z<p>S119349: </p>
<hr />
<div>This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
= Decision = <br />
<br />
The Decision block was made to be the highest in hierarchical order. It is called every iteration from the main. First, it calls the update function in the Scan block. One of the things this function does is recognize the type of junction the robot is headed towards, when within range. It saves this junction in its scandata, which is the class used for communication between blocks. This junction is an enum with several options:<br />
<br />
* '''Crossroad''', where it can go left, right or forward.<br />
<br />
* '''Tjunction_1''', where it can go left and right.<br />
<br />
* '''Tjunction_2''', where it can go right and forward.<br />
<br />
* '''Tjunction_3''', where it can go left and forward.<br />
<br />
* '''Deadend''', where it is at a dead end, within the range required for opening the door.<br />
<br />
* '''Corridor''', where it can see none of the above junction types, which means either that the next junction is outside the range of detection, or that the corridor it is following leads to a corner, which, since it can be solved with the regular potential field driving, does not require a decision. <br />
<br />
<br />
Next, the Decision block calls its algorithm function. This function sends the junction type to the Mapping block, which handles it and returns the direction the robot should take next. The algorithm then stores this direction in the scandata. <br />
<br />
<br />
After the algorithm function finishes, the Decision part will first check if the robot is in a dead end. If it is, it calls the Drive function which handles dead ends. Otherwise, it checks if it is following a corridor, in which case it drives with the regular potential field: first calling Scan to make the vector, then communicating this to Drive, which follows it. Finally, if it is approaching an intersection and has a direction, it calls the function constructing_walls within the Scan block. It then calls Scan to make the vector again, now with virtual walls, and Drive to follow it. For each of these cases, it also checks if collision avoidance is needed, and calls it if necessary. <br />
<br />
The main function of the Decision part, which is called every iteration and handles the different calls to blocks and functions looks as follows:<br />
<br />
void update() //update checks which action is relevant now<br />
{<br />
robot.tresshold_distance=0.75; //sets the threshold every iteration<br />
if(!scan.update(robot)) {return;} //updates the scan every iteration<br />
scan.deadend(robot); //starts the check to detect dead ends<br />
if(!robot.scanRanges.empty())<br />
{<br />
//localisation.UpdateKalman(robot); //can be turned on to use the Kalman filter localisation<br />
}<br />
scan.open_spaces(robot); //starts the check to detect open spaces<br />
if(robot.junction == robot.open_space){ //and then acts if the robot is in an open space<br />
mapping.openSpace();<br />
}<br />
if(robot.counter ==0 && robot.deadend_counter ==0) //if the robot is not currently busy handling a situation<br />
{<br />
algorithm(); //use the algorithm function to determine it's direction on the next detected situation<br />
}<br />
drive.Counter(robot); //updates the counter<br />
cout<<"Door opened: "<<robot.DoorOpened<<". Waiting for door: "<<robot.WaitingForDoor<br />
<<". Door-timer: "<<drive.DoorTimer<<endl;<br />
cout<<"At door area: "<<robot.AtDoorArea<<" Dead-end?: " <<robot.dead_end<br />
<<" noDoorDeadEnd: "<<robot.NoDoor_DeadEnd<<endl;<br />
if (robot.ReadyToSendRequest &&<br />
!robot.NoDoor_DeadEnd &&<br />
!robot.WaitingForDoor) //condition check for and call to requesting for door<br />
{<br />
cout<<"Open the door!!"<<endl;<br />
drive.WaitToOpenDoor(robot);<br />
}<br />
else if (robot.WaitingForDoor) //waiting and checking if the door is opened<br />
{<br />
cout<<"Waiting!! Door request"<<endl;<br />
drive.CheckDoor(robot);<br />
}<br />
else if(robot.dead_end==1) //if none of this is met, the robot will turn 180 degrees away from the dead end<br />
{<br />
if(robot.collision_check != 0.0)<br />
{<br />
drive.CollisionAvoidance(robot);<br />
}<br />
else //if scan.door==scan.AtDoorArea &&<br />
{<br />
drive.dead_end(robot);<br />
}<br />
}<br />
else if(robot.direction == robot.follow) //follow means that the direction is set to follow a corridor<br />
{<br />
if(robot.collision_check != 0.0) //check and call to collision avoidance<br />
{<br />
drive.CollisionAvoidance(robot);<br />
}<br />
else //driving regular corridor on a potential field<br />
{<br />
scan.potential(robot);<br />
drive.PotentialField_straight_Driving(robot);<br />
}<br />
}<br />
else<br />
{<br />
scan.constructing_walls(robot); //builds virtual walls based on the situation<br />
if(robot.collision_check != 0.0) //check and call to collision avoidance<br />
{<br />
drive.CollisionAvoidance(robot);<br />
}<br />
else<br />
{<br />
scan.potential(robot); //potential field driving with the virtual walls<br />
if (robot.direction == robot.straight)<br />
{<br />
drive.PotentialField_straight_Driving(robot); //for going straight<br />
}<br />
else<br />
{<br />
drive.PotentialFieldDriving(robot); //for taking corners<br />
}<br />
}<br />
}<br />
}<br />
<br />
<br />
<br />
Within the Decision block there is also the possibility of creating a list of subsequent actions to take. This makes it possible to construct a route to any node in the network set up in the Mapping block. However, since this is not required for Trémaux's algorithm, it is not used. This code would have used entered the current action as the first element of a list, then once the action is over, it would have deleted the first element, making the second element the first element now.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision&diff=19982Embedded Motion Control 2015 Group 3/Decision2015-06-23T16:57:26Z<p>S119349: </p>
<hr />
<div>= Decision = <br />
<br />
The Decision block was designed to be the highest in the hierarchy. It is called every iteration from the main loop. First, it calls the update function in the Scan block. Among other things, this function recognizes the type of junction the robot is heading towards, once it is within range. It saves this junction in its scandata, which is the class used for communication between blocks. The junction is an enum with several options:<br />
<br />
* '''Crossroad''', where it can go left, right or forward.<br />
<br />
* '''Tjunction_1''', where it can go left and right.<br />
<br />
* '''Tjunction_2''', where it can go right and forward.<br />
<br />
* '''Tjunction_3''', where it can go left and forward.<br />
<br />
* '''Deadend''', where it is at a dead end, within the range required for opening the door.<br />
<br />
* '''Corridor''', where it can see none of the above junction types, meaning either the next junction is outside the detection range, or the corridor it is following leads to a corner, which, since it can be handled by the regular potential field driving, does not require a decision. <br />
<br />
<br />
Next, the Decision block calls its algorithm function. This function sends the junction type to the Mapping block, which handles it and returns the direction the robot should take next. The algorithm then stores this direction in the scandata. <br />
<br />
<br />
After the algorithm function finishes, the Decision part first checks if the robot is in a dead end. If it is, it calls the Drive function that handles dead ends. Otherwise, it checks whether it is following a corridor; if so, it drives with the regular potential field, by first calling Scan to construct the vector and then passing it to Drive, which follows it. Finally, if it is approaching an intersection and has a direction, it calls the constructing_walls function within the Scan block. It then calls Scan again to construct the vector, now with virtual walls, and Drive to follow it. In each of these cases, it also checks whether collision avoidance is needed, and calls it if necessary. <br />
<br />
The main function of the Decision part, which is called every iteration and handles the different calls to blocks and functions looks as follows:<br />
<br />
 void update() // update checks which action is relevant now<br />
 {<br />
     robot.tresshold_distance = 0.75;     // sets the threshold every iteration<br />
     if (!scan.update(robot)) { return; } // updates the scan every iteration<br />
     scan.deadend(robot);                 // starts the check to detect dead ends<br />
     if (!robot.scanRanges.empty())<br />
     {<br />
         //localisation.UpdateKalman(robot); // can be turned on to use the Kalman filter localisation<br />
     }<br />
     scan.open_spaces(robot);                // starts the check to detect open spaces<br />
     if (robot.junction == robot.open_space) // and then acts if the robot is in an open space<br />
     {<br />
         mapping.openSpace();<br />
     }<br />
     if (robot.counter == 0 && robot.deadend_counter == 0) // if the robot is not currently busy handling a situation<br />
     {<br />
         algorithm(); // use the algorithm function to determine its direction on the next detected situation<br />
     }<br />
     drive.Counter(robot); // updates the counter<br />
     cout << "Door opened: " << robot.DoorOpened << ". Waiting for door: " << robot.WaitingForDoor<br />
          << ". Door-timer: " << drive.DoorTimer << endl;<br />
     cout << "At door area: " << robot.AtDoorArea << " Dead-end?: " << robot.dead_end<br />
          << " noDoorDeadEnd: " << robot.NoDoor_DeadEnd << endl;<br />
     if (robot.ReadyToSendRequest &&<br />
         !robot.NoDoor_DeadEnd &&<br />
         !robot.WaitingForDoor) // condition check for and call to requesting the door<br />
     {<br />
         cout << "Open the door!!" << endl;<br />
         drive.WaitToOpenDoor(robot);<br />
     }<br />
     else if (robot.WaitingForDoor) // waiting and checking if the door is opened<br />
     {<br />
         cout << "Waiting!! Door request" << endl;<br />
         drive.CheckDoor(robot);<br />
     }<br />
     else if (robot.dead_end == 1) // if none of the above is met, the robot turns 180 degrees away from the dead end<br />
     {<br />
         if (robot.collision_check != 0.0)<br />
         {<br />
             drive.CollisionAvoidance(robot);<br />
         }<br />
         else // if scan.door==scan.AtDoorArea &&<br />
         {<br />
             drive.dead_end(robot);<br />
         }<br />
     }<br />
     else if (robot.direction == robot.follow) // follow means that the direction is set to follow a corridor<br />
     {<br />
         if (robot.collision_check != 0.0) // check and call to collision avoidance<br />
         {<br />
             drive.CollisionAvoidance(robot);<br />
         }<br />
         else // driving a regular corridor on a potential field<br />
         {<br />
             scan.potential(robot);<br />
             drive.PotentialField_straight_Driving(robot);<br />
         }<br />
     }<br />
     else<br />
     {<br />
         scan.constructing_walls(robot); // builds virtual walls based on the situation<br />
         if (robot.collision_check != 0.0) // check and call to collision avoidance<br />
         {<br />
             drive.CollisionAvoidance(robot);<br />
         }<br />
         else<br />
         {<br />
             scan.potential(robot); // potential field driving with the virtual walls<br />
             if (robot.direction == robot.straight)<br />
             {<br />
                 drive.PotentialField_straight_Driving(robot); // for going straight<br />
             }<br />
             else<br />
             {<br />
                 drive.PotentialFieldDriving(robot); // for taking corners<br />
             }<br />
         }<br />
     }<br />
 }<br />
<br />
<br />
<br />
Within the Decision block there is also the possibility of creating a list of subsequent actions to take. This makes it possible to construct a route to any node in the network set up in the Mapping block. However, since this is not required for Trémaux's algorithm, it is not used. This code would have entered the current action as the first element of a list; once that action was completed, it would have deleted the first element, making the second element the new first element.<br />
<br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan&diff=19981Embedded Motion Control 2015 Group 3/Scan2015-06-23T16:57:04Z<p>S119349: </p>
<hr />
<div>= Scan = <br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
In order to solve the maze, the robot needs to be able to drive autonomously. One type of data that is available is the laser range finder (LRF) data. The PICO robot has a 270-degree field of view, with approximately a thousand beams. <br />
<br />
=== Potential field ===<br />
<br />
Splitting each received LRF beam into x and y components and summing them results in a single large vector pointing in the appropriate direction for PICO to follow. In other words, PICO always moves towards the point with the most room. Note that the actual magnitude of this resultant vector is of no importance, since the Drive block has its own conditions for velocity.<br />
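The summation above can be sketched as follows. This is an illustrative sketch only, not the project's actual code; the function name and the beam-angle parameters are assumptions.<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Toy 2D vector for the resultant direction.
struct Vec2 { double x; double y; };

// Sum the x and y components of every LRF beam. Distant readings
// contribute larger components, so the sum points toward the
// direction with the most room.
Vec2 potentialFieldDirection(const std::vector<double>& ranges,
                             double angleMin, double angleIncrement)
{
    Vec2 sum{0.0, 0.0};
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        const double angle = angleMin + i * angleIncrement;
        sum.x += ranges[i] * std::cos(angle);
        sum.y += ranges[i] * std::sin(angle);
    }
    return sum; // only the direction atan2(sum.y, sum.x) matters to Drive
}
```

As noted above, only the direction of the resultant is used; its magnitude is ignored by Drive.<br />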
<br />
In straight corridors, the potential field lets PICO drive through the middle in a robust manner. When PICO approaches a T-junction or intersection, a decision must be made by the decision maker.<br />
<br />
Since there is more than one option at an intersection, an extra element is needed to send the robot in the appropriate direction. This is done by blocking the other directions with virtual walls. In principle, an extra layer has been added on top of the laser range finder data that PICO sees. From there on, the potential field will do its work and PICO will drive in the desired direction.<br />
<br />
The potential field function perceives these virtual walls as real walls. Therefore, PICO will avoid these 'walls' and drive into the desired corridor. The decision maker, in combination with the mapping algorithm, decides where to place the virtual walls.<br />
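The overlay idea can be sketched as follows: before the potential field is evaluated, the ranges of the beams covering a blocked corridor are capped so that a nearby wall appears there. This is a hedged sketch only; the function name, the index bounds and the 0.3 m wall distance are assumptions, not the project's actual code.<br />

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Cap the ranges of the beams covering a blocked corridor so that the
// potential field perceives a nearby wall there and steers away from it.
void addVirtualWall(std::vector<double>& ranges,
                    std::size_t first, std::size_t last,
                    double wallDistance = 0.3) // assumed wall distance
{
    for (std::size_t i = first; i <= last && i < ranges.size(); ++i)
        ranges[i] = std::min(ranges[i], wallDistance);
}
```

The modified scan is then fed to the regular potential field, which avoids the capped region exactly as it would a real wall.<br />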
<br />
=== Collision avoidance ===<br />
<br />
The first level of safety is provided by the potential field algorithm. Its resultant vector always points towards the direction with the most room, and it is therefore sufficient as a first layer. However, collision avoidance is one of the top priorities, since if PICO bumps into a wall the attempt at solving the maze is over. Another safety layer has therefore been implemented to prevent the robot from hitting walls or corners. The distance to the walls is continuously measured and compared to a set safety margin. If the distance of multiple adjacent beams is smaller than this fixed parameter, the robot moves in the opposite direction.<br />
<br />
[[File:collision.gif|200px|thumb|right|Figure 1) Pico using collision avoidance]]<br />
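A minimal sketch of this second safety layer might look as follows; the margin value, the required number of consecutive beams and the function name are all assumptions.<br />

```cpp
#include <cstddef>
#include <vector>

// Return true when several consecutive beams fall below the safety
// margin, i.e. a wall or corner is dangerously close.
bool tooClose(const std::vector<double>& ranges,
              double margin = 0.3, // assumed safety margin in metres
              int needed = 5)      // assumed run of adjacent beams
{
    int run = 0;
    for (double r : ranges) {
        run = (r < margin) ? run + 1 : 0; // count consecutive close beams
        if (run >= needed)
            return true;
    }
    return false;
}
```

Requiring a run of adjacent beams rather than a single reading makes the check robust against isolated noisy samples.<br />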
<br />
=== Detection intersections ===<br />
<br />
At this stage the basic skill of driving with the potential field based on LRF data is complete. Next, the different types of junctions and intersections must be recognized in order to solve the maze. Recognition is not only necessary for driving through the maze, it is also an important part of mapping it, see [[Embedded_Motion_Control_2015_Group_3/Mapping|Mapping]].<br />
<br />
Since the maze is axis-aligned, there are three possibilities:<br />
<br />
# Crossroad<br />
# T-junction<br />
# Open space<br />
<br />
The first two cases are detected by taking beams n+10 and n-10 and checking whether they differ by more than 30 centimetres; in that case there is a corridor. With this simple but very effective method, left and right corridors can be distinguished. Next, a corridor in front is detected by adding up multiple beams at the front and dividing by the number of beams. If this average differs by more than a set value from the middle beam, there is a corridor. <br />
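Both detection rules can be sketched as follows. The 0.30 m jump threshold and the n±10 beam offset come from the text; the function names, the front-bundle width and the front threshold are assumptions.<br />

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Side corridor at beam n: beams n-10 and n+10 differ by more than 0.30 m.
bool sideCorridorAt(const std::vector<double>& ranges, std::size_t n)
{
    if (n < 10 || n + 10 >= ranges.size())
        return false;
    return std::fabs(ranges[n + 10] - ranges[n - 10]) > 0.30;
}

// Front corridor: the average of a bundle of front beams differs from the
// middle beam by more than a set value (0.5 m assumed here).
bool frontCorridor(const std::vector<double>& ranges,
                   std::size_t mid, std::size_t halfWidth,
                   double threshold = 0.5)
{
    double sum = 0.0;
    std::size_t count = 0;
    for (std::size_t i = mid - halfWidth; i <= mid + halfWidth; ++i, ++count)
        sum += ranges[i];
    return std::fabs(sum / count - ranges[mid]) > threshold;
}
```

A range discontinuity of more than 0.30 m between two beams 20 indices apart is, given the axis-aligned maze, almost always a corridor opening rather than noise.<br />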
<br />
===== Open space =====<br />
<br />
When 80% of the LRF data is larger than 1 metre, PICO knows it is in an open space, and it starts wall hugging in order to find the exit. PICO stops this procedure when the corridor is equal to or smaller than 1.5 metres, which is the maximum size of a corridor.<br />
<br />
<br />
[[File:open_space.gif|200px|thumb|right|Figure 1) Pico handling open space]]<br />
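The open-space test described above reduces to a simple count. This is a hedged sketch, with the 80% fraction and 1 m threshold taken from the text and the function name assumed.<br />

```cpp
#include <cstddef>
#include <vector>

// PICO assumes it is in an open space when at least 80% of the
// LRF ranges exceed 1 m.
bool inOpenSpace(const std::vector<double>& ranges)
{
    std::size_t far = 0;
    for (double r : ranges)
        if (r > 1.0)
            ++far;
    return far >= 0.8 * ranges.size();
}
```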
<br />
=== Constructing virtual walls ===<br />
<br />
Constructing virtual walls is an essential part of driving PICO around the maze. At first, individual virtual walls were constructed to block off corridors, leading PICO into the desired direction. At a later stage this idea was slightly modified by computing a wall on a radius, so that PICO moves more smoothly through a corner.<br />
<br />
<br />
===== Crossroad ===== <br />
Consider a crossroad as shown in the picture below; the left plot shows what PICO sees when approaching this kind of junction. There are three maxima, which represent the possible directions PICO can go. By slightly modifying the data, the actual vision as seen in the simulator can be constructed, shown in (b). In Figure 1, two minima are shown that represent the far corners between the three maxima. These provide PICO with reference points from which the virtual walls are constructed. Depending on the direction of the desired turn, the corner is used as a reference point for computing the radius at which the virtual walls are set.<br />
<br />
{| align = "center"<br />
|[[File:crossroad.png|400px|Figure 1) The LRF data from PICO, (a) showing the data pico retrieves in this case 3 maxima and 2 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_crossroad.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
===== T-junction =====<br />
<br />
Now a T-junction is examined further; Figure 2 (a) shows what PICO sees in this case. The figure shows two maxima with a minimum in between. These two maxima are used as bounds for finding the minimum. While the robot turns, it is hard to keep a reference point, and therefore this is a good method of finding the reference point, in this case a minimum, from which to construct the virtual walls.<br />
<br />
{| align = "center"<br />
|[[File:T-junction.png|400px|Figure 2) The LRF data from PICO, (a) showing the data pico retrieves in this case 2 maxima and 1 minima, (b) showing the slightly modified data to show the actual corridors]][[File:Wall_Tjunction.png|400px| In blue the original LRF data and in red the adjusted wall making LRF data. On the left the data PICO sees and on the right the modified data to recognize the corridors better as a human]]<br />
|}<br />
<br />
In the case of a T-junction the situation is slightly different: with the above method, the minimum will not represent a corner. However, locating this minimum is still useful; depending on the kind of turn, 100+n[minimum] or 100-n[minimum], a radius is computed which represents the virtual wall.<br />
<br />
=== Door Detection ===<br />
<br />
Since a door is basically a dead end which can be opened, PICO should search for dead ends. These have a specific profile where three walls are always visible and connected. Since the maze is axis-aligned, two of these walls are parallel and the one in between is perpendicular to both. Dead ends can be found at the end of a corridor and in junctions longer than 0.3 metres. This means PICO has to detect dead ends/doors in front, and at the side during turns. Therefore PICO searches three regions of its LRF data for doors: left, in front and right. This minimizes the detection of faulty doors and makes it easy to tell where the door is. All regions use the same method for detecting doors. This method is based on the profile of a dead end: PICO tries to recognize dead ends by searching its LRF data under a number of conditions. PICO searches for the two corner points where the walls meet. In between these corner points, the dead end/door can be detected. <br />
PICO first searches for the left corner point within a certain range. If this point is found, PICO searches for the wall or door, which should be to the right of this point; this is the closest point to PICO slightly right of the corner point. If this point is found, PICO searches for the last point, which is the right corner point. See Figure 3, where the points PICO must detect are highlighted by red crosses. Notice that PICO only looks for the next point if the previous one has been found. <br />
<br />
To prevent PICO from detecting faulty doors, two extra conditions to qualify as a door/dead end are added. Once the points are found, the distances to these points and their angles are known. This makes it possible to determine whether all points are aligned, which is done by calculating the distance of each point to PICO, shown in Figure 3 as "distance to pico". All three lengths should be approximately equal. <br />
The second condition is the length of the door, which has to be within the bounds given in the assignment: 0.5 m < door length < 1.5 m. If all conditions are met, the area qualifies as a dead end/door and, if the distance to PICO is small enough, PICO has reached the door area.<br />
During experiments we noticed that faulty doors could still be detected when walls were placed slightly askew, or due to measurement noise. Therefore a final fix was introduced: PICO has to detect a dead end/door ten times in a row before it actually qualifies as a real dead end/door. This eliminates measurement noise and faults. <br />
<br />
[[File:wiki_doors.png|400px|thumb|center|Figure 3) PICO detecting doors, top images showing how pico detects doors/dead ends in front. The middle images show PICO detecting a door/dead ends on the right side. Bottom images showing PICO detecting incoming doors/dead ends]]<br />
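The three qualification conditions can be combined in a sketch like the following. The width bounds (0.5 to 1.5 m) and the ten-in-a-row rule come from the text; the alignment tolerance, the struct layout and the function name are assumptions.<br />

```cpp
#include <cmath>

// Distances from PICO to the three detected points, plus the door width.
struct DoorCandidate { double dLeft; double dMid; double dRight; double width; };

// The candidate qualifies when the three points are roughly equidistant
// from PICO, the width is within the assignment bounds, and the
// detection has repeated ten times in a row.
bool qualifiesAsDoor(const DoorCandidate& c, int& streak)
{
    const double tol = 0.15; // assumed alignment tolerance in metres
    const bool aligned = std::fabs(c.dLeft - c.dMid) < tol &&
                         std::fabs(c.dRight - c.dMid) < tol;
    const bool widthOk = c.width > 0.5 && c.width < 1.5;
    streak = (aligned && widthOk) ? streak + 1 : 0; // reset on any failure
    return streak >= 10; // ten consecutive detections required
}
```

The caller keeps the streak counter across iterations, so a single noisy scan resets the count and cannot trigger a false door.<br />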
<br />
If PICO has detected a dead end/door and is positioned in the door area, PICO immediately stops and sends the door request. PICO then stands still for six seconds to see whether the door opens. After this waiting time, PICO checks for one second whether it is still in the door area. If PICO is still in the door area, nothing has changed and the door has not opened; PICO qualifies it as a regular dead end and turns around. If PICO is no longer in the door area, the door has opened and PICO continues its path. Since this path was directed at the door, PICO will drive through the door and search for the exit of the maze.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive&diff=19980Embedded Motion Control 2015 Group 3/Drive2015-06-23T16:55:56Z<p>S119349: </p>
<hr />
<div>This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
= Potential field =<br />
The type of driving used in the Maze Challenge was potential field driving. Although Drive was initially designed as a separate block in the software design, potential field driving relies mainly on manipulating LRF data, and the resultant vector can almost immediately be passed to the Pico IO layer. This negated the need for a separate Driving class. The potential field method is therefore described on the [[Embedded_Motion_Control_2015_Group_3/Scan|Scan]] page. The rest of this page is dedicated to the methods initially considered for driving through the maze.<br />
<br />
= Collision avoidance =<br />
This function is used to drive through corridors without touching the walls. It measures the nearest wall and identifies whether it is on the left or right side. A margin is set in order to avoid a collision; this margin determines when PICO has to adjust its direction. When, for instance, a wall on the left is closer than the margin, PICO moves to the right. <br />
<br />
= Old drive methods = <br />
For the corridor challenge, we tried a driving method based on extracting key points, but that turned out not to be robust. After the corridor challenge, we considered three options, of which potential field driving came out as the winner. The two other options use the key-point method for driving through straight corridors but take a different approach to corners. We called the first one the 'simple method'; basically, this method was developed as a back-up plan. The second approach uses path planning to drive around corners.<br />
<br />
=== Drive using key points ===<br />
The initial driving method we considered was based on selecting key points for various situations and updating the driving inputs accordingly. It turned out that robustly determining the key points was virtually impossible in corners, and the algorithms would fail whenever the key points were lost. <br />
<br />
It should be noted that the Drive class is not responsible for deciding whether it is driving in a corridor, so the algorithms are not robust to varying situations: Drive expects another class to determine exactly which algorithm should be used at any given time.<br />
<br />
==== Straight through corridors ====<br />
[[File:Drivec.png|thumb|right|Corridor driving]]<br />
The key points for driving through corridors are the points closest to the robot on the left and right side. Since the direction to these points is also perpendicular to a straight wall, they give sufficient information (distance from the wall as well as the direction of the wall) for driving through a straight corridor.<br />
<br />
First, the angle between the left and right wall points is bisected, which gives a direction straight through the corridor. The deviation between the robot's direction and the aforementioned direction results in an angular velocity for the robot through a simple gain.<br />
<br />
Next, a set forward velocity vector is created in the direction straight through the corridor (Vf in the picture). Furthermore, the distance from the centre line results in a sideways velocity vector. These vectors combined give the desired driving vector (bold black arrow in the picture), which is then transformed to the Vx and Vy relative to the robot, since the robot may be rotated.<br />
<br />
This type of driving was successfully used during the corridor challenge, and resulted in smooth and fast driving.<br />
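The corridor controller described above can be sketched as follows. The structure (bisection, forward vector, sideways correction, rotation gain) follows the text; the gains, forward speed, struct layout and function name are hypothetical, and the angles are assumed to be bearings in the robot frame.<br />

```cpp
#include <cmath>

struct Twist { double vx; double vy; double va; };

// angLeft/angRight: bearings of the closest left/right wall points;
// distLeft/distRight: their distances. The bisection of the two bearings
// points straight through the corridor; the distance difference measures
// the offset from the centre line.
Twist corridorControl(double angLeft, double angRight,
                      double distLeft, double distRight)
{
    const double corridorDir = 0.5 * (angLeft + angRight); // bisection
    const double offset = 0.5 * (distLeft - distRight);    // from centre line
    const double vForward = 0.5, kSide = 1.0, kTurn = 1.0; // assumed gains
    Twist t;
    // forward along the corridor plus a sideways correction toward the centre
    t.vx = vForward * std::cos(corridorDir) - kSide * offset * std::sin(corridorDir);
    t.vy = vForward * std::sin(corridorDir) + kSide * offset * std::cos(corridorDir);
    t.va = kTurn * corridorDir; // rotate until aligned with the corridor
    return t;
}
```

In a straight corridor with the robot centred and aligned, both corrections vanish and only the forward velocity remains.<br />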
<br />
==== Driving through corners ====<br />
[[File:Drive1.png|thumb|right|Corner driving]]<br />
The key points for driving through corners are the two corner points of the corridor PICO is trying to enter. This turned out to be the biggest letdown of this method: extracting the required local minima while turning was almost impossible to do robustly. Mostly because of this, other options were explored. <br />
<br />
In the figure on the right, one option for driving through corners is explored. The robot maintains a constant forward velocity and makes the corner by varying the angular velocity. First, the desired direction is determined. This is the bisection of the directions to the two corner points, since this vector always points to the exact centre of the corridor. Then, an angular velocity is calculated based on the difference between the desired direction and the current robot direction (the 'error'). A simple gain controller was used, and varying the gain influenced how 'tight' the corner would be taken.<br />
<br />
=== ''' Simple method''' ===<br />
This approach is the simplest one, which also means it is not the fanciest. However, it is still important to have it working, because we can always fall back on this method when the other methods fail, or use it as a minimal working example in other parts of the software. In addition, we learned a lot from it and used it as a base for the other methods.<br />
<br />
In short, the simple method contains 3 steps:<br />
# Drive to corridor without collision.<br />
# Detect opening (left or right) and stop in front of it.<br />
# Turn 90 degrees and start driving again.<br />
<br />
This method is a robust way to pass the corridor challenge, although it would not be the fastest way.<br />
<br />
=== ''' Path planning for turning '''===<br />
[[File:turningpath.png|thumb|right|alt=puntje.|Not the right figure yet]] Path planning is the second method that we worked on. Path planning can be used when PICO approaches an intersection. Assume that PICO has to go left at a T-junction; then collision avoidance alone will no longer be sufficient. So, for instance 0.2 metres before the corner, the ideal path to drive around the corner is calculated. This means that Vx, Vy, Va and the time (for turning) have to be determined at that particular moment. <br />
<br />
Then basically, <br />
* Driving straight stops;<br />
* Turning with the determined Vx, Vy and Va for the associated time to drive around the corner;<br />
* Driving straight again.<br />
In practice, this method turned out to be very hard, because it is difficult to determine the right values for the variables.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19979Embedded Motion Control 2015 Group 32015-06-23T16:55:05Z<p>S119349: </p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make seperate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software designs and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). To make this operational and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* To do so, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
<br />
[[File:behaviour_diagram.png|thumb|left|Block diagram of the connections between the contexts]] The robot has a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the most high-level proceedings the robot should be able to do. These are:<br />
*Determine situation<br />
*Decision making<br />
*Skill selection<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
*Drive<br />
*Rotate<br />
*Scan environment<br />
*Handle intersections<br />
*Handle dead ends<br />
*Discover doors<br />
*Mapping environment<br />
*Make decisions based on the map<br />
*Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|thumb|left|Overall software structure]]To solve the problem, it is divided into different blocks, each with its own function. We have chosen to make five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure below shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. A short clarification of the figure follows; more detailed information on each block is discussed in the next sections. <br />
<br />
Let's start with the Scan block:<br />
* Scan receives information about the environment. To do this, it uses its laser range finder data.<br />
* Based on this data Scan consults its potential field algorithm to make a vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at what interesting point the robot is. For instance, this can be a junction or a dead end.<br />
* Then it should know if the robot has been there before. Based on that, Decision can send a new action to Scan/Drive. <br />
* Now the new vector is based on the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.<br />
<br />
=== Calibration ===<br />
<p>[[File:Originaldata.png|thumb|left|Figure 1 - Calibration: Difference between odometry and LRF]] In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors. The robot was programmed to drive back and forth in front of a wall. At every time instance, it collected odometry data as well as laser data. The laser data point straight in front of the robot was compared to the odometry data, i.e. the driven distance is compared to the measured distance to the wall in front of the robot. 'Figure 1 - Calibration' shows these results. The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped so that the laser data should match the odometry exactly if the sensors provided perfect data. Two things are notable from this figure:<br />
*The laserdata and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
[[File:StaticLRF.png|thumb|left|alt=Static LRF|Figure 2 - Calibration: Static LRF]]<br />
<br />
The noisy signal that was returned by the laser is presented in 'Figure 2 - Calibration'. Here, a part of the laser data is picked from a robot that was not moving.<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm.<br />
* The laser produces a noisy signal. Do not trust one measurement but take the average over time instead.<br />
* The odometry produces no notable noise at all, but it has a significant drift as the driven distance increases. Usage is recommended only for smaller distances (<1 m)</p><br />
<br><br><br><br><br><br><br><br><br />
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
A brief description of each block can be found here. In addition, more detailed problem-solving processes and ideas can be found on the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
[[File:Drive.jpg|thumb|left|CP of Drive]] Basically, the [[Embedded_Motion_Control_2015_Group_3/Drive|Drive block]] is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in [[Embedded_Motion_Control_2015_Group_3/Scan|Scan]]. A potential field is an easy way to drive through corridors and make turns. It is important to note that information from the Decision maker can influence the tasks Drive has to perform. <br />
<br />
Two other methods were also considered: the [[Embedded_Motion_Control_2015_Group_3/Archive#Simple_method|Simple method]] and [[Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning|Path planning]]. We spent a lot of time on the path planning method in particular. However, after testing, the potential field turned out to be the most robust and convenient method.<br />
<br><br><br><br><br><br />
<br />
=== Scan block ===<br />
[[File:Scan_cp_new.jpg|thumb|left|Composition pattern Scan]][[Embedded_Motion_Control_2015_Group_3/Scan|The block Scan]] processes the laser data of the Laser Range Finders. This data is used to detect corridors, doors, and different kinds of junctions. The data retrieved by 'scan' is used by all three other blocks. <br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br><br><br><br><br />
<br />
PICO always moves to the place with the most space using its potential field. However, at junctions and intersections the current potential field is incapable of leading PICO in the desired direction. Virtual walls are constructed to shield off potential pathways, so that PICO will move in the direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. The scan block is also capable of detecting doors, which is a necessary part of solving the maze. More detailed information about these properties:<br />
<br />
* Potential field<br />
* Detecting junctions/intersections<br />
* Virtual walls<br />
* Collision avoidance<br />
* Detecting doors<br />
<br />
=== Decision block ===<br />
[[File:Composition_Pattern_Decision.png|thumb|left|Composition pattern Decision]]The [[Embedded_Motion_Control_2015_Group_3/Decision|Decision block]] contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
<ins>Input:</ins><br />
* Mapping model<br />
* Scan data<br />
<br />
<ins>Output:</ins><br />
* Specific drive action command<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm is based on drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). At each junction, choose the direction with the fewest marks; if two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached.<br />
<br />
=== Mapping block ===<br />
[[File:Emc03 wayfindingCP1.png|thumb|left|Map&solve algorithm]] [[Embedded_Motion_Control_2015_Group_3/Mapping|The mapping block]] will store the corridors and junctions of the maze. This way, the decision maker can make informed decisions at intersections, to ensure that the maze will be solved in a strategic way.<br />
<br />
To do this, the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used, which is an implementation of Depth First Search.<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end, or any place in the maze where the robot can go in more than one direction (i.e., an intersection). An edge is the connection between one node and another, and as such is also called a corridor. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is a direction that the Mapping block considers the best option.<br />
<br />
In order to detect loops, the position of the robot must be known as well as the position of each node, to see when the robot has returned to the same location. This is decoupled from the Mapping block and done in the [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation block]]. Since the localization did not work in the end, a Random Walk implementation was also made in the Mapping block.<br />
<br />
=== Localisation block ===<br />
The purpose of localisation is to prevent the robot from driving in a loop indefinitely, by knowing where it is at any given moment in time. Based on this information, it can decide what to do next. As a result, the robot will be able to exit the maze within finite time, or report that there is no exit once it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation page]], by separating and discussing the important factors.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge. During this challenge, the robot had to drive through a corridor and then take the first exit (whether left or right). This job could be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implied some extra work to design a properly structured code, because only then could the same script be used for the final challenge. After the corridor competition, our choice was debatable: we failed the corridor challenge while other groups succeeded. However, most of those groups had selected approach 1, whereas we already had a decent base for the a-maze-ing challenge. This proved its worth later.<br />
<br />
For the a-maze-ing challenge we decided on using two versions of our software package. In the first run (see the Videos section further down the page), we implemented Tremaux's algorithm together with a localiser that would map the maze and try to solve it. Our second run was conducted with Tremaux's algorithm and the localisation algorithm turned off. Each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
=== Run 1 ===<br />
The first run was captured on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way cross-section and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused by the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because the 'door' (of course) didn't open, it decides to turn right and explore the dead end. In the part between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight while also evading the walls to the left and right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits the maximum of 5 seconds that it can take a door to open. It then recognizes that this too is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Tremaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the other nodes. It decides to follow the same route as the first time, fails to drive to the corridor with the door in it, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction that the robot encountered made PICO go into its collision avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
=== Run 2 ===<br />
For the second run, we ran a version of our software without Tremaux's algorithm implemented and with the global localiser absent. These features were developed later in the project and were not 100% finished. For this run, a random decision was passed to the decision maker every time it asked for a new direction to head in.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same as in the first run. This is because the robot is driven by separate blocks of software: the blocks that are active while following a corridor were exactly the same for both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. Now, a random decision is made to go left, followed by a right turn into the corridor with the door. It recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after the bell was rung, the robot is programmed to wait for five seconds until it is allowed to move again. During these five seconds, it uses the LRF to check whether the door has moved out of its way. After the passage was all clear, the robot started exploring the new area and drives into the open space. Note that, between 0:30 and 0:36, the robot makes a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of that open space. Between 0:36 and 0:46 it drives in 'open space mode'. This means that the robot drives to the nearest wall and starts driving alongside it; it should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47, the normal 'corridor mode' starts working again. The potential field method directs the robot towards the middle of the corridor, which explains the sharp turn it makes at 0:47. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the random decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. 
At 1:10, the well-earned applause for PICO starts as it finishes the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [[Embedded_Motion_Control_2015_Group_3/Experiments|Here]] you can find information about the dates and goals of the experiments, along with a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [[Embedded_Motion_Control_2015_Group_3/Drive|Drive]] <br />
* [[Embedded_Motion_Control_2015_Group_3/Scan|Scan]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Decision|Decision]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Mapping|Mapping]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Experiments|Experiments]]<br />
* [[Embedded_Motion_Control_2015_Group_3/Archive|Archive]] No longer used.<br />
* [[Embedded_Motion_Control_2015_Group_3/Integration|Integration]] <-- needed?<br />
* [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation]]</div>
<hr />
<div>= Mapping = <br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth First Search), which proves to be an efficient way of solving a maze with minimum backtracking.<br />
<br />
Because the Localization block did not work in the end, and as such loops could not be resolved, a Random Walk strategy was used, so that at least the maze would (probably) be solved, and the capabilities of the other blocks could be shown.<br />
<br />
= World model structure =<br />
[[File:Map.png|400px|right|thumb|Example of a world model. Note that all unknown or impossible-to-reach edges are connected to noNode, so that every edge exists and algorithmic integrity is ensured]]<br />
The maze will consist of nodes and edges; i.e., an undirected graph. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is the direction that is advised, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive in any of the principal directions, and that the junctions can only be of specific formats, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠, four-way intersection ╬, and dead end ╥. <br />
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [[Embedded_Motion_Control_2015_Group_3/Localisation|localisation class]].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anything between one (dead end) and four (cross-intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* Number of times Pico has traversed a corridor. This is important for Tremaux algorithm, which will be explained later.<br />
* Travel time for a corridor. This can be used to give priorities in case multiple options are present. <br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graphing Library) Graph object. This means that most of the overhead of maintaining an undirected graph is done by an existing library. The library used is also extremely extendable by means of Bundled Properties, which facilitate an arbitrary number of properties for nodes and edges. The main disadvantage is the syntactic overhead generated especially because BGL is so extendable, although the overhead is mostly contained into making the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|right|thumb|Map&solve algorithm (update?)]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** Robot tries to find where he is located in global coordinates. Now it can decide if it is on a new node or on an old node.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything about this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisted old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out from which node it came from. Now it can define what edge it has been traversing. It marks the edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot also has to realize whether the current node is connected to a dead end. In that case, it will request a possible door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''Visited 2 times'' (not currently implemented due to earlier concerns about the robustness of the door detection). If not, choose the edge you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, choose any edge that has been visited only once.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
**Above points can be summarized as 'pick the least visited edge'. If formulated this way, the algorithm will also allow for 'accidents', e.g., an edge visited more than two times, which should definitely be avoided, and will never stop exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
**In case more than one edge is an option according to Tremaux, a cost function determines the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that explores as many options as possible in a given time). The cost function could be extended by analyzing the entire maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although currently not implemented, this again shows a huge advantage of using an existing graphing library, since graph traversal is built in.<br />
<br />
//Done with the mapping process. Now, let's pick a node to go to!<br />
int minTimesVisited = INT_MAX; //Largest int, so any real visit count is smaller (HUGE_VAL is a double).<br />
double minCost = HUGE_VAL;<br />
int decidedDirection; //GLOBAL COORDINATES<br />
for(int direction = 0; direction<4; ++direction){<br />
//check if this direction is actually possible<br />
if(isPossibleDirection(type,direction)){<br />
int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
if(thisCorridorTimesVisited <= minTimesVisited){<br />
if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;} //New minimum visit count: restart the cost comparison.<br />
minTimesVisited = thisCorridorTimesVisited;<br />
double cost=getCost(direction);<br />
if(cost<minCost){<br />
decidedDirection = direction;<br />
minCost = cost; //Remember the best cost so far, so later directions are compared against it.<br />
}<br />
}<br />
}<br />
}<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a global direction pointing from the node to the direction of the edge. The robot must receive a command that will guide it through the maze in local coordinates. Globally, we can assume that there are only four possible directions because of the axis-aligned maze: North, west, south and east. These directions are represented by the integers 0, 1, 2 and 3 respectively in our code. <br />
* The actual command is formulated<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'prevNode', so the next time the algorithm is called, it knows where it came from and start figuring out the next step.<br />
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux' algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paint brush. Continuously, the maze solver will drag the brush behind him, creating a line on the floor. When an intersection is encountered, the solver will see if any corridors do not have a line of paint on the floor. He will take one of those corridors, since he hasn't explored that bit of maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but instead use the world model we created. Pico will update an integer for each corridor which indicates the number of times he visited a certain corridor. Then, Pico simply prefers the corridor with the least amount of visits. In theory, this should be always 0, 1 or 2 times (with 0 times preferred over 1 time, and 2 times should be generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something - in this case, 2 times is preferred over 3 times, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not prescribe what to do when there are multiple possibilities - it does not matter for solving a maze if time is not an issue. We would however like to quickly explore as many nodes as possible, so instead of just picking a random direction, we pick the direction which minimizes the travel time to a next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen if at all possible.<br />
<br />
This could in theory be extended with more elaborate searches, for example, trying not to go to a node that leads only to twice-visited edges, which should be perfectly possible using some built-in algorithms from Boost. However, this is not implemented yet.<br />
<br />
= Random Walk =<br />
To detect and close loops (which were likely to be present) in the maze, the Mapping block needed information from Localization, which was not functional in time. Wall-following is not a viable option if the robot starts in a loop (which was in fact the case in the A-Maze-ing challenge, as it turned out), so an alternative strategy had to be employed. The strategy chosen is '''random walk'''. This strategy will solve any (static) maze with probability <math>1</math> as <math>t\to\infty</math>. Of course, we do not have infinite time, but trying to optimize this strategy in any way (e.g., trying to avoid going in one direction all the time) ''could'' mean that it will never solve the maze, since the aforementioned property only holds for a truly random walk. As a bonus, there is also a certain probability that it will optimally solve a maze regardless of its properties, which was in fact the case for our maze challenge run.<br />
<br />
Since the software for the mapping was almost completely decoupled from the rest, the random walk could be implemented easily. It will still get an intersection type from the Decision Maker, and will still return a decision. It was decided not to have U-turns in the random walk (except for dead ends, of course), since it's trivially shown that a U-turn can always be achieved with a combination of other decisions, and U-turns would in general severely impede progress through the maze.<br />
<br />
Throughout the course, it was always a balancing act between decoupling and ease of implementation. In this particular case, ease of implementation was favoured, since the random walk block merely tries to solve the maze and highlight (successfully so) the capabilities of all other blocks. Furthermore, this block was focused on robustness; as such, even though in the below example <code>case 2:</code> could be used, <code>default</code> was chosen to eliminate the risk of accidental bugs.<br />
<br />
<code><br />
srand (time(NULL)); //Initialize random number generator<br />
int direction =0; //Initialize direction<br />
switch(type){<br />
case FourWay:<br />
direction = rand()%3;<br />
switch(direction){<br />
case 0:<br />
return Left;<br />
break;<br />
case 1:<br />
return Forward;<br />
break;<br />
default:<br />
return Right;<br />
break;<br />
}<br />
break;<br />
case LeftJunction: //etc </code></div>
<hr />
<div>= Mapping = <br />
This page is part of the [[Embedded_Motion_Control_2015_Group_3|EMC03 CST-wiki]].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth First Search), which proves to be an efficient way of solving a maze with minimum backtracking.<br />
<br />
Because the Localization block did not work in the end, loops could not be resolved. A Random Walk strategy was therefore used, so that the maze would at least (probably) be solved and the capabilities of the other blocks could be shown.<br />
<br />
= World model structure =<br />
[[File:Map.png|400px|right|thumb|Example of a world model. Note that all unknown or impossible-to-reach edges are connected to noNode, so that every edge exists and algorithmic integrity is ensured]]<br />
The maze consists of nodes and edges; i.e., an undirected graph. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the possible routes the robot can take (left, straight ahead, right, turn around) and the output is the advised direction, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive in any of the principal directions, and that the junctions can only be of specific formats, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠, four-way intersection ╬, and dead end ╥.<br />
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation localisation class].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anything between one (dead end) and four (cross-intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* Number of times Pico has traversed a corridor. This is important for the Tremaux algorithm, which will be explained later.<br />
* Travel time for a corridor. This can be used to give priorities in case multiple options are present. <br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graph Library) Graph object. This means that most of the overhead of maintaining an undirected graph is handled by an existing library. The library is also highly extensible by means of Bundled Properties, which allow an arbitrary number of properties to be attached to nodes and edges. The main disadvantage is the syntactic overhead generated precisely because BGL is so extensible, although that overhead is mostly confined to constructing the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|right|thumb|Map&solve algorithm (update?)]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot determines where it is located in global coordinates. It can then decide whether it is at a new node or at a previously visited one.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything about this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisited old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out which node it came from. It can then determine which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot will also have to recognize whether the current node is a dead end. In that case, it will request the possible door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''Visited 2 times'' (not currently implemented due to earlier concerns about the robustness of door detection). If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If only once-visited edges remain, choose any edge that has been visited only once.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
**Above points can be summarized as 'pick the least visited edge'. If formulated this way, the algorithm will also allow for 'accidents', e.g., an edge visited more than two times, which should definitely be avoided, and will never stop exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
**When more than one edge is an option according to Tremaux, a cost function determines the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that will explore as many options as possible in a given time). The cost function could be extended by analyzing the maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although this is not currently implemented, it again shows a huge advantage of using an existing graph library, since graph traversal is built into the library.<br />
<br />
//Done with the mapping process. Now, let's pick a node to go to!<br />
int minTimesVisited = INT_MAX; //Largest int (from <climits>); any real visit count is smaller<br />
double minCost = HUGE_VAL;<br />
int decidedDirection; //GLOBAL COORDINATES<br />
for(int direction = 0; direction<4; ++direction){<br />
//check if this direction is actually possible<br />
if(isPossibleDirection(type,direction)){<br />
int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
if(thisCorridorTimesVisited <= minTimesVisited){<br />
if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;}<br />
minTimesVisited = thisCorridorTimesVisited;<br />
double cost=getCost(direction);<br />
if(cost<minCost){<br />
minCost = cost; //Keep track of the best cost so far<br />
decidedDirection = direction;<br />
}<br />
}<br />
}<br />
}<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. Each edge has a global direction, pointing from a node along that edge. The robot must receive a command that guides it through the maze in local coordinates. Globally, we can assume that there are only four possible directions because of the axis-aligned maze: north, west, south, and east. These directions are represented by the integers 0, 1, 2 and 3 respectively in our code.<br />
* The actual command is formulated<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'prevNode', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux's algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paint brush. The maze solver continuously drags the brush behind him, creating a line on the floor. When an intersection is encountered, the solver checks whether any corridors do not have a line of paint on the floor. He will take one of those corridors, since he hasn't explored that bit of maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but instead use the world model we created. Pico updates an integer for each corridor indicating the number of times it has visited that corridor. Pico then simply prefers the corridor with the fewest visits. In theory, this should always be 0, 1 or 2 times (with 0 preferred over 1, and 2 generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something - in this case, 2 times is preferred over 3 times, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not prescribe what to do when there are multiple possibilities - it does not matter for solving a maze if time is not an issue. However, we would like to explore as many nodes as possible quickly, so instead of just picking a random direction, we pick the direction which minimizes the travel time to the next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen if at all possible.<br />
<br />
This could in theory be extended with more elaborate searches - for example, avoiding nodes that lead only to twice-visited edges, which should be perfectly possible using some built-in algorithms from Boost. However, this is not implemented yet.<br />
<br />
= Random Walk =<br />
To detect and close loops (which were likely to be present) in the maze, the Mapping block needed information from Localization, which was not functional in time. Wall-following is not a viable option if the robot starts in a loop (which, as it turned out, was in fact the case in the A-Maze-ing challenge), so an alternative strategy had to be employed. The strategy chosen is '''random walk'''. This strategy will solve any (static) maze with probability <math>1</math> as <math>t\to\infty</math>. Of course, we do not have infinite time, but trying to optimize this strategy in any way (e.g., trying to avoid going in one direction all the time) ''could'' mean that it will never solve the maze, since the aforementioned property only holds for a truly random walk. As a bonus, there is also a certain probability that it will solve a maze optimally regardless of its properties, which was in fact the case for our maze challenge run.<br />
<br />
Since the software for the mapping was almost completely decoupled from the rest, the random walk could be implemented easily. It will still get an intersection type from the Decision Maker, and will still return a decision. It was decided not to have U-turns in the random walk (except for dead ends, of course), since it's trivially shown that a U-turn can always be achieved with a combination of other decisions, and U-turns would in general severely impede progress through the maze.<br />
<br />
Throughout the course, it was always a balancing act between decoupling and ease of implementation. In this particular case, ease of implementation was favoured, since the random walk block merely tries to solve the maze and highlight (successfully so) the capabilities of all other blocks. Furthermore, this block was focused on robustness; as such, even though <code>case 2:</code> could be used in the example below, <code>default</code> was chosen to eliminate the risk of accidental bugs.<br />
<br />
<code><br />
srand(time(NULL)); //Seed the random number generator<br />
int direction = 0; //Initialize direction<br />
switch(type){<br />
case FourWay:<br />
direction = rand()%3; //Three options (U-turns excluded): 0, 1 or 2<br />
switch(direction){<br />
case 0:<br />
return Left;<br />
break;<br />
case 1:<br />
return Forward;<br />
break;<br />
default:<br />
return Right;<br />
break;<br />
}<br />
break;<br />
case LeftJunction: //etc </code></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping&diff=19975Embedded Motion Control 2015 Group 3/Mapping2015-06-23T16:42:09Z<p>S119349: /* Schedule */</p>
<hr />
<div>= Mapping = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth First Search), which proves to be an efficient way of solving a maze with minimum backtracking.<br />
<br />
Because the Localization block did not work in the end, and as such loops could not be resolved, a Random Walk strategy was used, so that at least the maze would (probably) be solved, and the capabilities of the other blocks could be shown.<br />
<br />
= World model structure =<br />
[[File:Map.png|400px|right|thumb|Example of a world model. Note that all unknown or impossible-to-reach edges are connected to noNode, so that every edge exists; this ensures algorithmic integrity]]<br />
The maze will consist of nodes and edges; i.e., an undirected graph. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is the direction that is advised, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive in one of the principal directions, and that the junctions can only be of specific types, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠, four-way intersection ╬, and dead end ╥.<br />
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation localisation class].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anything between one (dead end) and four (cross-intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* Number of times Pico has traversed a corridor. This is important for the Tremaux algorithm, which will be explained later.<br />
* Travel time for a corridor. This can be used to give priorities in case multiple options are present. This may later be used to define weighting functions to choose a next corridor.<br />
<br />
<br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graph Library) graph object, so that most of the overhead of maintaining an undirected graph is handled by an existing library. The library is also highly extensible by means of Bundled Properties, which allow an arbitrary number of properties to be attached to nodes and edges. The main disadvantage is the syntactic overhead, precisely because BGL is so extensible, although this overhead is mostly confined to constructing the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|right|thumb|Map & solve algorithm]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is on a new node or on an old node.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything about this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisited old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out which node it came from. It can then determine which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, travelling time, shape of the edge... This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.<br />
** The robot also has to recognize whether the current node is connected to a dead end. In that case, it will request the (possible) door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for Pico. If it did, go straight ahead and mark the edge that led up to the door as ''visited 2 times'' (not currently implemented due to earlier concerns about the robustness of the door detection). If it did not, choose the edge you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, choose any one of them.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
**Above points can be summarized as 'pick the least visited edge'. If formulated this way, the algorithm will also allow for 'accidents', e.g., an edge visited more than two times, which should definitely be avoided, and will never stop exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
**In case more than one edge is an option according to Tremaux, a cost function determines the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that will explore as many options as possible in a given time). The cost function could be extended by analyzing the entire maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although currently not implemented, this again shows a huge advantage of using an existing graphing library, since graph traversal is built into the library.<br />
<br />
//Done with the mapping process. Now, let's pick a node to go to!<br />
int minTimesVisited = INT_MAX; //From <climits>; HUGE_VAL is a double and would not fit in an int<br />
double minCost = HUGE_VAL;<br />
int decidedDirection; //GLOBAL COORDINATES<br />
for(int direction = 0; direction<4; ++direction){<br />
//check if this direction is actually possible<br />
if(isPossibleDirection(type,direction)){<br />
int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
if(thisCorridorTimesVisited <= minTimesVisited){<br />
if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;} //Strictly fewer visits: reset the cost minimum<br />
minTimesVisited = thisCorridorTimesVisited;<br />
double cost=getCost(direction);<br />
if(cost<minCost){<br />
minCost = cost; //Remember the new minimum cost<br />
decidedDirection = direction;<br />
}<br />
}<br />
}<br />
}<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. Each edge has a global direction, pointing from the node towards the edge. The robot must receive a command that will guide it through the maze in local coordinates. Globally, we can assume that there are only four possible directions because of the axis-aligned maze: north, west, south, and east. These directions are represented by the integers 0, 1, 2, and 3 respectively in our code.<br />
* The actual command is formulated<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'prevNode', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux' algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paint brush. Continuously, the maze solver will drag the brush behind him, creating a line on the floor. When an intersection is encountered, the solver will see if any corridors do not have a line of paint on the floor. He will take one of those corridors, since he hasn't explored that bit of maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but instead use the world model we created. Pico updates an integer for each corridor indicating the number of times he has visited that corridor. Pico then simply prefers the corridor with the fewest visits. In theory, this should always be 0, 1, or 2 times (with 0 preferred over 1, and 2 generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something; in that case, 2 times is preferred over 3 times, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not prescribe what to do when there are multiple possibilities; it does not matter for solving a maze if time is not an issue. We, however, would like to explore as many nodes as possible quickly, so instead of just picking a random direction, we pick the direction which minimizes the travel time to a next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen if at all possible.<br />
<br />
This can in theory be extended with more elaborate searches: for example, trying not to go to a node that will lead only to twice-visited edges, which should be perfectly possible using some built-in algorithms from Boost. However, this is not implemented yet.<br />
<br />
= Random Walk =<br />
To detect and close loops (which were likely to be present) in the maze, the Mapping block needed information from Localization, which was not functional in time. Wall-following is not a viable option if the robot starts in a loop (which, as it turned out, was in fact the case in the A-Maze-ing challenge), so an alternative strategy had to be employed. The strategy chosen is '''random walk'''. This strategy will solve any (static) maze with probability <math>1</math> as <math>t\to\infty</math>. Of course, we do not have infinite time, but trying to optimize this strategy in any way (e.g., trying to avoid going in one direction all the time) ''could'' mean that it will never solve the maze, since the aforementioned property only holds for a truly random walk. As a bonus, there is also a certain probability that it will solve a maze optimally regardless of its properties, which was in fact the case for our maze challenge run.<br />
<br />
Since the software for the mapping was almost completely decoupled from the rest, the random walk could be implemented easily. It will still get an intersection type from the Decision Maker, and will still return a decision. It was decided not to have U-turns in the random walk (except for dead ends, of course), since it's trivially shown that a U-turn can always be achieved with a combination of other decisions, and U-turns would in general severely impede progress through the maze.<br />
<br />
Throughout the course, it was always a balancing act between decoupling and ease of implementation. In this particular case, ease of implementation was favoured, since the random walk block merely tries to solve the maze and highlight (successfully so) the capabilities of all other blocks. Furthermore, this block was focused on robustness; as such, even though <code>case 2:</code> could be used in the example below, <code>default</code> was chosen to eliminate the risk of accidental bugs.<br />
<br />
<code><br />
srand (time(NULL)); //Initialize random number generator (ideally seeded only once, at startup)<br />
int direction = 0; //Initialize direction<br />
switch(type){<br />
case FourWay:<br />
direction = rand()%3;<br />
switch(direction){<br />
case 0:<br />
return Left;<br />
break;<br />
case 1:<br />
return Forward;<br />
break;<br />
default:<br />
return Right;<br />
break;<br />
}<br />
break;<br />
case LeftJunction: //etc </code></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=File:Map.png&diff=19974File:Map.png2015-06-23T16:41:21Z<p>S119349: uploaded a new version of "File:Map.png"</p>
<hr />
<div></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=File:Map.png&diff=19973File:Map.png2015-06-23T16:38:20Z<p>S119349: uploaded a new version of "File:Map.png"</p>
<hr />
<div></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping&diff=19971Embedded Motion Control 2015 Group 3/Mapping2015-06-23T16:33:57Z<p>S119349: /* World model structure */</p>
<hr />
<div>= Mapping = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth First Search), which proves to be an efficient way of solving a maze with minimum backtracking.<br />
<br />
Because the Localization block did not work in the end, and as such loops could not be resolved, a Random Walk strategy was used, so that at least the maze would (probably) be solved, and the capabilities of the other blocks could be shown.<br />
<br />
= World model structure =<br />
[[File:Map.png|400px|right|thumb|Example of a world model. Note that all unknown edges or impossible-to-reach edges are connected to noNode to make sure that all edges exist to ensure algorithmic integrity]]<br />
The maze will consist of nodes and edges; i.e., an undirected graph. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is the direction that is advised, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive in any of the principal directions, and that the junctions can only be of specific formats, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠ four-way intersection ╬ and dead end ╥ . <br />
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation localisation class].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anything between one (dead end) and four (cross-intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* Number of times Pico has traversed a corridor. This is important for Tremaux algorithm, which will be explained later.<br />
* Travel time for a corridor. This can be used to give priorities in case multiple options are present. This may later be used to define weighting functions to choose a next corridor.<br />
<br />
<br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graphing Library) Graph object. This means that most of the overhead of maintaining an undirected graph is done by an existing library. The library used is also extremely extendable by means of Bundled Properties, which facilitate an arbitrary number of properties for nodes and edges. The main disadvantage is the syntactic overhead generated especially because BGL is so extendable, although the overhead is mostly contained into making the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** Robot tries to find where he is located in global coordinates. Now it can decide if it is on a new node or on an old node.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything abbout this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisted old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out from which node it came from. Now it can define what edge it has been traversing. It marks the edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot will also have to realize if the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: Go straight ahead and mark the edge that lead up to the door as ''Visited 2 times'' (not currently implemented due to previous concerns in the door detection robustness.). If not, choose the edge where you came from<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, any edge that is visited only once.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
**Above points can be summarized as 'pick the least visited edge'. If formulated this way, the algorithm will also allow for 'accidents', e.g., an edge visited more than two times, which should definitely be avoided, and will never stop exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
**In case there is more than one edge an option according to Tremaux, a cost function will determine the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that will explore as much options as possible in a given time). The cost function could be extended by analyzing the entire maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although currently not implemented, this again shows a huge advantage of using an existing graphing library, since graph traversal is built-in in the library.<br />
<br />
//Done with the mapping process. Now, let's pick a node to go to!<br />
int minTimesVisited = HUGE_VAL;<br />
double minCost = HUGE_VAL;<br />
int decidedDirection; //GLOBAL COORDINATES<br />
for(int direction = 0; direction<4; ++direction){<br />
//check if this direction is actually possible<br />
if(isPossibleDirection(type,direction)){<br />
int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
if(thisCorridorTimesVisited <= minTimesVisited){<br />
if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;}<br />
minTimesVisited = thisCorridorTimesVisited;<br />
double cost=getCost(direction);<br />
if(cost<minCost){<br />
decidedDirection = direction;<br />
}<br />
}<br />
}<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a global direction pointing from the node to the direction of the edge. The robot must receive a command that will guide it through the maze in local coordinates. Globally, we can assume that there are only four possible directions because of the axis-aligned maze: North, west, south and east. These directions are represented by the integers 0, 1, 2 and 3 respectively in our code. <br />
* The actual command is formulated<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'prevNode', so the next time the algorithm is called, it knows where it came from and start figuring out the next step.<br />
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux' algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paint brush. Continuously, the maze solver will drag the brush behind him, creating a line on the floor. When an intersection is encountered, the solver will see if any corridors do not have a line of paint on the floor. He will take one of those corridors, since he hasn't explored that bit of maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but instead use the world model we created. Pico will update an integer for each corridor which indicates the number of times he visited a certain corridor. Then, Pico simply prefers the corridor with the least amount of visits. In theory, this should be always 0, 1 or 2 times (with 0 times preferred over 1 time, and 2 times should be generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something - in this case, 2 times is preferred over 3 times, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not describe to do when there are multiple possibilities - it does not matter for solving a maze if time is not an issue. We would however like to quickly explore as many nodes as possible, so instead of just picking a random direction, we pick the direction which minimizes the travel time to a next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen if at all possible.<br />
<br />
This can in theory be extended with a more elaborate searches. For example, to try not to go to a node that will lead to only twice-visited edges, which should be perfectly possible by using some built-in algorithms from Boost. However, this is not implemented yet.<br />
<br />
= Random Walk =<br />
To detect and close loops (which were likely to be present) in the maze, the Mapping block needed information from Localization, which was not functional in time. Wall-following is not a viable option if the robot starts in a loop (which was in fact the case in the A-Maze-ing challenge as it turned out), so an alternative strategy must be employed. The strategy chosen is '''random walk'''. This strategy will solve any (static) maze with probability <math>1</math> as <math>t\to\infty</math>. Of course, we do not have infinite time, but trying to optimize this strategy in any way (e.g., try and avoid going in one direction all the time) ''could'' mean that it will never solve the maze, since aforementioned property only hold for a truly random walk. As a bonus, there is also a certain probability that it will optimally solve a maze regardless of its properties, which was in fact the case for our maze challenge run.<br />
<br />
Since the software for the mapping was almost completely decoupled from the rest, the random walk could be implemented easily. It will still get an intersection type from the Decision Maker, and will still return a decision. It was decided not to have U-turns in the random walk (except for dead ends, of course), since it's trivially shown that a U-turn can always be achieved with a combination of other decisions, and U-turns would in general severely impede progress through the maze.<br />
<br />
Throughout the course, it was always a balancing act between decoupling and ease of implementation. In this particular case, ease of implementation was favoured, since the random walk block merely tries to solve the maze and highlight (successfully so) the capabilities of all the other blocks. Furthermore, this block was focused on robustness; as such, even though in the example below <code>case 2:</code> could be used, <code>default</code> was chosen to eliminate the risk of accidental bugs.<br />
<br />
<code><br />
srand(time(NULL)); //Seed the random number generator (ideally only once, at startup)<br />
int direction = 0; //Initialize direction<br />
switch(type){<br />
case FourWay:<br />
direction = rand()%3; //Three options at a four-way intersection: no U-turns<br />
switch(direction){<br />
case 0:<br />
return Left;<br />
break;<br />
case 1:<br />
return Forward;<br />
break;<br />
default:<br />
return Right;<br />
break;<br />
}<br />
break;<br />
case LeftJunction: //etc </code></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=File:Map.png&diff=19969File:Map.png2015-06-23T16:30:31Z<p>S119349: </p>
<hr />
<div></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping&diff=19968Embedded Motion Control 2015 Group 3/Mapping2015-06-23T16:11:17Z<p>S119349: /* Mapping block */</p>
<hr />
<div>= Mapping = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth-First Search), which is an efficient way of solving a maze with minimal backtracking.<br />
<br />
Because the Localization block did not work in the end, loops could not be resolved; therefore, a Random Walk strategy was used, so that the maze would (probably) still be solved and the capabilities of the other blocks could be shown.<br />
<br />
= World model structure =<br />
The maze is modelled as nodes and edges; i.e., an undirected graph. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the set of possible routes the robot can take (left, straight ahead, right, turn around) and the output is the direction that is advised, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive along the principal directions, and that the junctions can only be of specific formats, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠, four-way intersection ╬ and dead end ╥.<br />
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation localisation class].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anything between one (dead end) and four (cross-intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* Number of times Pico has traversed a corridor. This is important for the Tremaux algorithm, which will be explained later.<br />
* Travel time for a corridor. This can be used to set priorities when multiple options are present, and may later be used to define weighting functions for choosing the next corridor.<br />
<br />
<br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graph Library) graph object. This means that most of the overhead of maintaining an undirected graph is handled by an existing library. The library is also extremely extensible by means of Bundled Properties, which facilitate an arbitrary number of properties for nodes and edges. The main disadvantage is the syntactic overhead generated precisely because BGL is so extensible, although that overhead is mostly contained in setting up the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map & solve algorithm]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is at a new node or at a previously visited node.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything about this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisited old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out which node it came from. It can then determine which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, travel time, shape of the edge, and so on. This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.<br />
** The robot also has to recognize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''visited 2 times'' (not currently implemented due to concerns about the robustness of the door detection). If not, choose the edge you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, take any edge that has been visited only once.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must then be an edge left to explore (visited once or not at all), or you are back at the starting point and the maze has no solution.<br />
**The points above can be summarized as 'pick the least-visited edge'. Formulated this way, the algorithm also allows for 'accidents', e.g., an edge visited more than two times (which should definitely be avoided), and it never stops exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
**When more than one edge is an option according to Tremaux, a cost function determines the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that explores as many options as possible in a given time). The cost function could be extended by analyzing the maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although this is currently not implemented, it again shows a huge advantage of using an existing graphing library, since graph traversal is built into the library.<br />
<br />
//Done with the mapping process. Now, let's pick a direction to go to!<br />
int minTimesVisited = INT_MAX; //From <climits>; HUGE_VAL is a double and should not be assigned to an int.<br />
double minCost = HUGE_VAL;<br />
int decidedDirection = -1; //GLOBAL COORDINATES; -1 means no decision yet<br />
for(int direction = 0; direction<4; ++direction){<br />
//check if this direction is actually possible<br />
if(isPossibleDirection(type,direction)){<br />
int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
if(thisCorridorTimesVisited <= minTimesVisited){<br />
if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;} //Strictly fewer visits: reset the cost comparison.<br />
minTimesVisited = thisCorridorTimesVisited;<br />
double cost=getCost(direction);<br />
if(cost<minCost){<br />
minCost = cost; //Remember the best cost so far.<br />
decidedDirection = direction;<br />
}<br />
}<br />
}<br />
}<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. Each edge has a global direction pointing from the node towards the edge. The robot, however, must receive a command in local coordinates that will guide it through the maze. Globally, we can assume that there are only four possible directions because of the axis-aligned maze: north, west, south and east. These directions are represented by the integers 0, 1, 2 and 3 respectively in our code.<br />
* The actual command is formulated<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'prevNode', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux' algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paintbrush. The maze solver continuously drags the brush behind him, creating a line on the floor. When an intersection is encountered, the solver checks whether any corridors do not yet have a line of paint on the floor. He will take one of those corridors, since he has not explored that part of the maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but to use the world model we created instead. Pico updates an integer for each corridor indicating the number of times he has visited that corridor. Pico then simply prefers the corridor with the smallest number of visits. In theory, this should always be 0, 1 or 2 (with 0 preferred over 1, and 2 generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something - in that case, 2 visits are preferred over 3, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not prescribe what to do when there are multiple possibilities - the choice does not matter for solving a maze if time is not an issue. We would, however, like to explore as many nodes as possible in a short time, so instead of just picking a random direction, we pick the direction which minimizes the travel time to the next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen if at all possible.<br />
<br />
This can in theory be extended with more elaborate searches. For example, one could try not to go to a node that leads only to twice-visited edges, which should be perfectly possible using built-in algorithms from Boost. However, this is not implemented yet.<br />
<br />
= Random Walk =<br />
To detect and close loops (which were likely to be present) in the maze, the Mapping block needed information from Localization, which was not functional in time. Wall-following is not a viable option if the robot starts inside a loop (which, as it turned out, was in fact the case in the A-Maze-ing challenge), so an alternative strategy had to be employed. The strategy chosen is '''random walk'''. This strategy will solve any (static) maze with probability <math>1</math> as <math>t\to\infty</math>. Of course, we do not have infinite time, but trying to optimize this strategy in any way (e.g., trying to avoid going in one direction all the time) ''could'' mean that it will never solve the maze, since the aforementioned property only holds for a truly random walk. As a bonus, there is also a certain probability that it will solve a maze optimally regardless of its properties, which was in fact the case for our maze challenge run.<br />
<br />
Since the software for the mapping was almost completely decoupled from the rest, the random walk could be implemented easily. It still gets an intersection type from the Decision Maker, and still returns a decision. It was decided not to have U-turns in the random walk (except for dead ends, of course), since it is trivially shown that a U-turn can always be achieved with a combination of other decisions, and U-turns would in general severely impede progress through the maze.<br />
<br />
Throughout the course, it was always a balancing act between decoupling and ease of implementation. In this particular case, ease of implementation was favoured, since the random walk block merely tries to solve the maze and highlight (successfully so) the capabilities of all the other blocks. Furthermore, this block was focused on robustness; as such, even though in the example below <code>case 2:</code> could be used, <code>default</code> was chosen to eliminate the risk of accidental bugs.<br />
<br />
<code><br />
srand(time(NULL)); //Seed the random number generator (ideally only once, at startup)<br />
int direction = 0; //Initialize direction<br />
switch(type){<br />
case FourWay:<br />
direction = rand()%3; //Three options at a four-way intersection: no U-turns<br />
switch(direction){<br />
case 0:<br />
return Left;<br />
break;<br />
case 1:<br />
return Forward;<br />
break;<br />
default:<br />
return Right;<br />
break;<br />
}<br />
break;<br />
case LeftJunction: //etc </code></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping&diff=19967Embedded Motion Control 2015 Group 3/Mapping2015-06-23T16:09:33Z<p>S119349: /* Random Walk */</p>
<hr />
<div>= Mapping = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth First Search), which proves to be an efficient way of solving a maze with minimum backtracking.<br />
<br />
= World model structure =<br />
The maze will consist of nodes and edges; i.e., an undirected graph. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is the direction that is advised, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive in any of the principal directions, and that the junctions can only be of specific formats, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠ four-way intersection ╬ and dead end ╥ . <br />
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation localisation class].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anything between one (dead end) and four (cross-intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* Number of times Pico has traversed a corridor. This is important for Tremaux algorithm, which will be explained later.<br />
* Travel time for a corridor. This can be used to give priorities in case multiple options are present. This may later be used to define weighting functions to choose a next corridor.<br />
<br />
<br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graphing Library) Graph object. This means that most of the overhead of maintaining an undirected graph is done by an existing library. The library used is also extremely extendable by means of Bundled Properties, which facilitate an arbitrary number of properties for nodes and edges. The main disadvantage is the syntactic overhead generated especially because BGL is so extendable, although the overhead is mostly contained into making the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** Robot tries to find where he is located in global coordinates. Now it can decide if it is on a new node or on an old node.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything abbout this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisted old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out from which node it came from. Now it can define what edge it has been traversing. It marks the edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot will also have to realize if the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: Go straight ahead and mark the edge that lead up to the door as ''Visited 2 times'' (not currently implemented due to previous concerns in the door detection robustness.). If not, choose the edge where you came from<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, any edge that is visited only once.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
**Above points can be summarized as 'pick the least visited edge'. If formulated this way, the algorithm will also allow for 'accidents', e.g., an edge visited more than two times, which should definitely be avoided, and will never stop exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
**In case there is more than one edge an option according to Tremaux, a cost function will determine the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that will explore as much options as possible in a given time). The cost function could be extended by analyzing the entire maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although currently not implemented, this again shows a huge advantage of using an existing graphing library, since graph traversal is built-in in the library.<br />
<br />
//Done with the mapping process. Now, let's pick a node to go to!<br />
int minTimesVisited = HUGE_VAL;<br />
double minCost = HUGE_VAL;<br />
int decidedDirection; //GLOBAL COORDINATES<br />
for(int direction = 0; direction<4; ++direction){<br />
//check if this direction is actually possible<br />
if(isPossibleDirection(type,direction)){<br />
int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
if(thisCorridorTimesVisited <= minTimesVisited){<br />
if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;}<br />
minTimesVisited = thisCorridorTimesVisited;<br />
double cost=getCost(direction);<br />
if(cost<minCost){<br />
decidedDirection = direction;<br />
}<br />
}<br />
}<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a global direction pointing from the node to the direction of the edge. The robot must receive a command that will guide it through the maze in local coordinates. Globally, we can assume that there are only four possible directions because of the axis-aligned maze: North, west, south and east. These directions are represented by the integers 0, 1, 2 and 3 respectively in our code. <br />
* The actual command is formulated<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'prevNode', so the next time the algorithm is called, it knows where it came from and start figuring out the next step.<br />
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux' algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paint brush. Continuously, the maze solver will drag the brush behind him, creating a line on the floor. When an intersection is encountered, the solver will see if any corridors do not have a line of paint on the floor. He will take one of those corridors, since he hasn't explored that bit of maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but instead use the world model we created. Pico will update an integer for each corridor which indicates the number of times he visited a certain corridor. Then, Pico simply prefers the corridor with the least amount of visits. In theory, this should be always 0, 1 or 2 times (with 0 times preferred over 1 time, and 2 times should be generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something - in this case, 2 times is preferred over 3 times, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not describe to do when there are multiple possibilities - it does not matter for solving a maze if time is not an issue. We would however like to quickly explore as many nodes as possible, so instead of just picking a random direction, we pick the direction which minimizes the travel time to a next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen if at all possible.<br />
<br />
This can in theory be extended with a more elaborate searches. For example, to try not to go to a node that will lead to only twice-visited edges, which should be perfectly possible by using some built-in algorithms from Boost. However, this is not implemented yet.<br />
<br />
= Random Walk =<br />
To detect and close loops (which were likely to be present) in the maze, the Mapping block needed information from Localization, which was not functional in time. Wall-following is not a viable option if the robot starts in a loop (which was in fact the case in the A-Maze-ing challenge as it turned out), so an alternative strategy must be employed. The strategy chosen is '''random walk'''. This strategy will solve any (static) maze with probability <math>1</math> as <math>t\to\infty</math>. Of course, we do not have infinite time, but trying to optimize this strategy in any way (e.g., try and avoid going in one direction all the time) ''could'' mean that it will never solve the maze, since aforementioned property only hold for a truly random walk. As a bonus, there is also a certain probability that it will optimally solve a maze regardless of its properties, which was in fact the case for our maze challenge run.<br />
<br />
Since the software for the mapping was almost completely decoupled from the rest, the random walk could be implemented easily. It will still get an intersection type from the Decision Maker, and will still return a decision. It was decided not to have U-turns in the random walk (except for dead ends, of course), since it's trivially shown that a U-turn can always be achieved with a combination of other decisions, and U-turns would in general severely impede progress through the maze.<br />
<br />
Throughout the course, it was always a balancing act between decoupling and ease of implementation. In this particular case, ease of implementation was favoured, since the random walk block merely tries to solve the maze and highlight (successfully so) the capabilities of all other blocks. Furthermore, this block was focused on robustness; as such, even though <code>case 2:</code> could be used in the example below, <code>default</code> was chosen to eliminate the risk of accidental bugs.<br />
<br />
<code><br />
srand (time(NULL)); //Seed the random number generator (ideally once at startup; reseeding at every call repeats results within the same second)<br />
int direction = 0; //Initialize direction<br />
switch(type){<br />
case FourWay:<br />
direction = rand()%3;<br />
switch(direction){<br />
case 0:<br />
return Left;<br />
break;<br />
case 1:<br />
return Forward;<br />
break;<br />
default:<br />
return Right;<br />
break;<br />
}<br />
break;<br />
case LeftJunction: //etc </code></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping&diff=19966Embedded Motion Control 2015 Group 3/Mapping2015-06-23T16:09:08Z<p>S119349: /* Random Walk */</p>
<hr />
<div>= Mapping = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth First Search), which proves to be an efficient way of solving a maze with minimum backtracking.<br />
<br />
= World model structure =<br />
The maze will consist of nodes and edges; i.e., an undirected graph. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the set of possible routes the robot can take (left, straight ahead, right, turn around) and the output is the advised direction, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive in one of the principal directions, and that the junctions can only be of specific formats, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠, four-way intersection ╬, and dead end ╥.<br />
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation localisation class].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anywhere from one (dead end) to four (four-way intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* Number of times Pico has traversed a corridor. This is important for the Tremaux algorithm, which is explained later.<br />
* Travel time for a corridor. This can be used to assign priorities when multiple options are present, and may later be used to define weighting functions for choosing the next corridor.<br />
<br />
<br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graph Library) Graph object. This means that most of the overhead of maintaining an undirected graph is handled by an existing library. The library is also extremely extendable by means of Bundled Properties, which facilitate an arbitrary number of properties for nodes and edges. The main disadvantage is the syntactic overhead generated precisely because BGL is so extendable, although that overhead is mostly confined to constructing the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is at a new node or at an old one.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything about this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisited old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out which node it came from. Now it can determine which edge it has been traversing, and it marks the edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.<br />
** The robot will also have to recognize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''Visited 2 times'' (not currently implemented due to concerns about the robustness of the door detection). If not, go back along the edge you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, choose any of those.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet visited), or you are back at the starting point and the maze has no solution.<br />
**The above points can be summarized as 'pick the least-visited edge'. Formulated this way, the algorithm also allows for 'accidents', e.g., an edge visited more than two times (which should definitely be avoided), and it never stops exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
**In case more than one edge is an option according to Tremaux, a cost function determines the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that will explore as many options as possible in a given time). The cost function could be extended by analyzing the entire maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although currently not implemented, this again shows a huge advantage of using an existing graph library, since graph traversal is built into the library.<br />
<br />
//Done with the mapping process. Now, let's pick a node to go to!<br />
int minTimesVisited = INT_MAX; //From <climits>; HUGE_VAL is a double and does not fit in an int<br />
double minCost = HUGE_VAL;<br />
int decidedDirection = 0; //GLOBAL COORDINATES<br />
for(int direction = 0; direction<4; ++direction){<br />
//check if this direction is actually possible<br />
if(isPossibleDirection(type,direction)){<br />
int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
if(thisCorridorTimesVisited <= minTimesVisited){<br />
if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;} //New minimum visit count: reset the cost comparison<br />
minTimesVisited = thisCorridorTimesVisited;<br />
double cost=getCost(direction);<br />
if(cost<minCost){<br />
minCost = cost; //Remember the best cost so far, so later candidates must beat it<br />
decidedDirection = direction;<br />
}<br />
}<br />
}<br />
}<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. Each edge has a global direction, pointing from the node toward the edge. The robot must receive a command that will guide it through the maze in local coordinates. Globally, we can assume that there are only four possible directions because the maze is axis-aligned: north, west, south and east. These directions are represented by the integers 0, 1, 2 and 3 respectively in our code.<br />
* The actual command is formulated:<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'prevNode', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux' algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paint brush. Continuously, the maze solver will drag the brush behind him, creating a line on the floor. When an intersection is encountered, the solver will see if any corridors do not have a line of paint on the floor. He will take one of those corridors, since he hasn't explored that bit of maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but instead to use the world model we created. Pico updates an integer for each corridor indicating the number of times it has visited that corridor. Pico then simply prefers the corridor with the fewest visits. In theory, this count should always be 0, 1 or 2 (with 0 preferred over 1, and 2 generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something - in this case, 2 visits are preferred over 3, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not prescribe what to do when there are multiple possibilities - it does not matter for solving the maze if time is not an issue. We would, however, like to quickly explore as many nodes as possible, so instead of just picking a random direction, we pick the direction which minimizes the travel time to the next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen if at all possible.<br />
<br />
This can in theory be extended with more elaborate searches. For example, trying not to go to a node that leads only to twice-visited edges, which should be perfectly possible using some built-in algorithms from Boost. However, this is not implemented yet.<br />
<br />
= Random Walk =<br />
To detect and close loops (which were likely to be present) in the maze, the Mapping block needed information from Localization, which was not functional in time. Wall-following is not a viable option if the robot starts in a loop (which, as it turned out, was in fact the case in the A-Maze-ing challenge), so an alternative strategy had to be employed. The strategy chosen is '''random walk'''. This strategy will solve any (static) maze with probability <math>1</math> as <math>t\to\infty</math>. Of course, we do not have infinite time, but trying to optimize this strategy in any way (e.g., trying to avoid going in one direction all the time) ''could'' mean that it will never solve the maze, since the aforementioned property only holds for a truly random walk. As a bonus, there is also a certain probability that it will solve a maze optimally regardless of its properties, which was in fact the case for our maze challenge run.<br />
<br />
Since the software for the mapping was almost completely decoupled from the rest, the random walk could be implemented easily. It will still get an intersection type from the Decision Maker, and will still return a decision. It was decided not to have U-turns in the random walk (except for dead ends, of course), since it's trivially shown that a U-turn can always be achieved with a combination of other decisions, and U-turns would in general severely impede progress through the maze.<br />
<br />
Throughout the course, it was always a balancing act between decoupling and ease of implementation. In this particular case, ease of implementation was favoured, since the random walk block merely tries to solve the maze and highlight (successfully so) the capabilities of all other blocks. Furthermore, this block was focused on robustness; as such, even though <code>case 2:</code> could be used in the example below, <code>default</code> was chosen to eliminate the risk of accidental bugs.<br />
<br />
<code><br />
srand (time(NULL)); //Seed the random number generator (ideally once at startup; reseeding at every call repeats results within the same second)<br />
int direction = 0; //Initialize direction<br />
switch(type){<br />
case FourWay:<br />
direction = rand()%3;<br />
switch(direction){<br />
case 0:<br />
return Left;<br />
break;<br />
case 1:<br />
return Forward;<br />
break;<br />
default:<br />
return Right;<br />
break;<br />
}<br />
break;<br />
case LeftJunction: //etc </code></div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19962Embedded Motion Control 2015 Group 32015-06-23T15:36:41Z<p>S119349: /* Mapping block */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make separate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply it in the context of autonomous robots. The accompanying assignment applies this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge of and insight into the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations of the embedded environment (actuators, sensors, processors, RTOS). To make this operational and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move through the maze and reach the exit.<br />
* Avoid bumping into the walls; this requires perceiving the surroundings.<br />
* Solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
<br />
[[File:behaviour_diagram.png|thumb|left|Block diagram of the connections between the contexts]] The robot has a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level activities the robot should be able to perform. These are:<br />
*Determine situation<br />
*Decision making<br />
*Skill selection<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
*Drive<br />
*Rotate<br />
*Scan environment<br />
*Handle intersections<br />
*Handle dead ends<br />
*Discover doors<br />
*Mapping environment<br />
*Make decisions based on the map<br />
*Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|thumb|left|Software architecture]]To solve the problem, it is divided into different blocks, each with its own function. We have chosen five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. A short clarification of the figure is given here; more detailed information on each block is discussed in the next sections. <br />
<br />
Let's start with the Scan block:<br />
* Scan receives information about the environment. To do this, it uses its laser range finder data.<br />
* Based on this data, Scan uses its potential field algorithm to produce a direction vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at what interesting point the robot is. For instance, this can be a junction or a dead end.<br />
* Then it should know if the robot has been there before. Based on that, Decision can send a new action to Scan/Drive. <br />
* Now the new vector is based on the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.<br />
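The data exchanged between these blocks can be sketched with a few plain structures; the names below are hypothetical, not the actual interfaces:<br />

```cpp
#include <cassert>
#include <cmath>

// Direction vector produced by Scan's potential field and consumed by Drive
// (hypothetical type for illustration).
struct DriveVector {
    double x, y; // desired driving direction in the robot frame
};

// High-level command from Decision that biases the potential field in Scan
// (hypothetical set of actions).
enum class Action { FollowCorridor, TurnLeft, TurnRight, TurnAround };

// Drive only needs the direction, so the vector is normalised before use.
DriveVector normalised(DriveVector v) {
    double n = std::sqrt(v.x * v.x + v.y * v.y);
    if (n > 0.0) {
        v.x /= n;
        v.y /= n;
    }
    return v;
}
```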
<br />
=== Calibration ===<br />
<p>[[File:Originaldata.png|thumb|left|Figure 1 - Calibration: Difference between odometry and LRF]] In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors. The robot was programmed to drive back and forth in front of a wall. At every time instant, it collected odometry data as well as laser data. The laser data point straight in front of the robot was compared to the odometry data, i.e. the driven distance was compared to the measured distance to the wall in front of the robot. 'Figure 1 - Calibration' shows these results. The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped, so that the laser data would match the odometry exactly if the sensors provided perfect data. Two things are notable from this figure:<br />
*The laser data and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
[[File:StaticLRF.png|thumb|left|alt=Static LRF|Figure 2 - Calibration: Static LRF]]<br />
<br />
The noisy signal that was returned by the laser is presented in 'Figure 2 - Calibration'. Here, a part of the laser data is picked from a robot that was not moving.<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm.<br />
* The laser produces a noisy signal. Do not trust one measurement but take the average over time instead.<br />
* The odometry produces no notable noise at all, but it has significant drift as the driven distance increases. Usage is recommended only for smaller distances (< 1 m).</p><br />
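Following the advice above, averaging consecutive readings of the same beam is a one-liner; a minimal sketch (the sample values in the test are made up):<br />

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Average several consecutive LRF readings of one beam to suppress the
// roughly 5.5 mm standard-deviation noise measured above.
double averagedRange(const std::vector<double>& samples) {
    return std::accumulate(samples.begin(), samples.end(), 0.0) /
           static_cast<double>(samples.size());
}
```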
<br><br><br><br><br><br><br><br><br />
<br />
= Software implementation =<br />
In this section, the implementation of the software is discussed based on the block division we made.<br />
<br />
A brief description of each block is given here. More detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
[[File:Drive.jpg|thumb|left|CP of Drive]] Basically, the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive block] is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. A potential field is an easy way to drive through corridors and make turns. It is important to note that information from the Decision maker can influence the tasks Drive has to do. <br />
<br />
Two other methods were also considered: the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. We spent a lot of time on the path planning method in particular. However, after testing, the potential field proved to be the most robust and most convenient method.<br />
<br><br><br><br><br><br />
<br />
=== Scan block ===<br />
[[File:Scan_cp_new.jpg|thumb|left|Composition pattern Scan]][http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan The Scan block] processes the laser data of the laser range finder. This data is used to detect corridors, doors, and different kinds of junctions. The data retrieved by 'scan' is used by all three other blocks. <br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br><br><br><br><br />
<br />
Using its potential field, PICO always moves towards the place with the most space. However, at junctions and intersections the potential field alone is incapable of leading PICO in the desired direction. Virtual walls are constructed to shield off potential pathways, so that PICO moves in the direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. The scan block is also capable of detecting doors, which is a necessary part of solving the maze. More detailed information about these properties:<br />
<br />
* Potential field<br />
* Detecting junctions/intersections<br />
* Virtual walls<br />
* Collision avoidance<br />
* Detecting doors<br />
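The core idea of the potential field can be sketched as a weighted sum of push-away vectors, one per laser beam; closer walls push harder. This is a simplified illustration, not the actual implementation (which also handles virtual walls and the attraction towards open space):<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 {
    double x, y;
};

// Sum a repulsive contribution per beam: each hit pushes the robot away
// from the obstacle, weighted by the inverse square of its distance.
Vec2 potentialFieldDirection(const std::vector<double>& ranges,
                             double angleMin, double angleStep) {
    Vec2 v{0.0, 0.0};
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double angle = angleMin + angleStep * static_cast<double>(i);
        double w = 1.0 / (ranges[i] * ranges[i]); // closer wall, stronger push
        v.x -= w * std::cos(angle);
        v.y -= w * std::sin(angle);
    }
    return v;
}
```

The resulting vector is handed to Drive, which steers the robot in that direction.<br />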
<br />
=== Decision block ===<br />
[[File:Composition_Pattern_Decision.png|thumb|left|Composition pattern Decision]]The [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision block] contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
<ins>Input:</ins><br />
* Mapping model<br />
* Scan data<br />
<br />
<ins>Output:</ins><br />
* Specific drive action command<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks; if two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached.<br />
<br />
=== Mapping block ===<br />
[[File:Emc03 wayfindingCP1.png|thumb|left|Map&solve algorithm]] [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping The mapping block] will store the corridors and junctions of the maze. This way, the decision maker can make informed decisions at intersections, to ensure that the maze will be solved in a strategic way.<br />
<br />
To do this, the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used, which is an implementation of Depth First Search.<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end, or any place in the maze where the robot can go in more than one direction (i.e., an intersection). An edge is the connection between one node and another, and as such is also called a corridor. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is a direction that the Mapping block considers the best option.<br />
<br />
In order to detect loops, the position of the robot must be known as well as the position of each node, to see when the robot has returned to the same location. This is decoupled from the Mapping block and done in the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation block]. Since the localization did not work in the end, a Random Walk implementation was also made in the Mapping block.<br />
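A minimal sketch of such a node/edge store, with loop detection by matching node positions (the names and the 0.3 m matching tolerance are hypothetical):<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Maze graph for a Tremaux-style mapper: nodes are junctions or dead ends
// identified by position; edges are corridors carrying a visit count.
struct Node {
    double x, y;
};

struct Edge {
    int from, to;
    int visits;
};

struct MazeGraph {
    std::vector<Node> nodes;
    std::vector<Edge> edges;

    // Return the index of an existing node within 'tol' metres of (x, y),
    // or create a new node. Matching an existing node while arriving via a
    // new edge means the robot has closed a loop.
    int nodeAt(double x, double y, double tol = 0.3) {
        for (std::size_t i = 0; i < nodes.size(); ++i) {
            if (std::hypot(nodes[i].x - x, nodes[i].y - y) < tol) {
                return static_cast<int>(i);
            }
        }
        nodes.push_back({x, y});
        return static_cast<int>(nodes.size()) - 1;
    }
};
```

Edge visit counts would then be incremented each time a corridor is traversed, feeding the decision rule described above.<br />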
<br />
=== Localisation block ===<br />
The purpose of localisation is to prevent the robot from driving in a loop indefinitely, by knowing where it is at any given moment. Based on this information, it can decide what to do next. As a result, the robot will be able to exit the maze within finite time, or report that there is no exit once it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation page] by separating and discussing the important factors.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge, in which the robot has to drive through a corridor and then take the first exit (whether left or right). This job could be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implied some extra work to think about properly structured code, because only then could the same script be used for the final challenge. After the corridor competition our choice was debatable: we failed the corridor challenge while other groups succeeded. However, most of those groups had selected approach 1, whereas we already had a decent base for the a-maze-ing challenge, which proved its worth later on.<br />
<br />
For the a-maze-ing challenge we decided to use two versions of our software package. In the first run (see the Videos section further down the page), we implemented Trémaux's algorithm together with a localiser, which would together map the maze and try to solve it. Our second run was conducted with Trémaux's algorithm and the localisation algorithm turned off: each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
=== Run 1 ===<br />
The first run is taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way cross-section and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused by the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because the 'door' (of course) didn't open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight while also evading the walls to the left and right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits the maximum of 5 seconds that it can take to open a door. It then recognizes that this, too, is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Trémaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the other nodes to be placed. It decides to follow the same route as the first time, fails to drive into the corridor with the door, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction the robot encountered sent PICO into its collision-avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localization went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
=== Run 2 ===<br />
For the second run, we ran a version of our software without Trémaux's algorithm and with the global localiser absent. These features were developed later in the project and were not 100% finished. In this run, a random decision was passed to the decision maker every time it asked for a new direction to head to.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same as in the first run. This is because the robot is driven by separate blocks of software, and the blocks active while following a corridor were identical in both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. A random decision is made to go left, followed by a right turn into the corridor with the door. It recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after the bell, the robot is programmed to wait for five seconds before it is allowed to move again. During these five seconds, it used the LRF to check whether the door had moved out of its way. After the passage was all clear, the robot started exploring the new area and drove into the open space. Note that, between 0:30 and 0:36, the robot made a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of that space. Between 0:36 and 0:46 it drives in 'open space mode', which means the robot drives to the nearest wall and starts driving alongside it; it should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47 the normal 'corridor mode' starts working again: the potential field method directs the robot towards the middle of the corridor, which explains the sharp turn at 0:47. After hearing the presenter ask 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the random decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06.
At 1:10, the well-earned applause for PICO started as it finished the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find short information about the dates and goals of the experiments, along with a short evaluation of each.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive] <br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive] No longer used.<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation]</div><p>Embedded Motion Control 2015 Group 3, revision of 2015-06-23T15:35:38Z by S119349: /* Mapping block */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make seperate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software designs and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight in the possibilities and limitations in relation with the embedded environment (actuators, sensors, processors, RTOS). To make this operational and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge. In which the robot has to find its way out in a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* So, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
<br />
[[File:behaviour_diagram.png|thumb|left|Blockdiagram for connection between the contexts]] The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The task are the most high level proceedings the robot should be able to do. These are:<br />
*Determine situation<br />
*Decision making<br />
*Skill selection<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
*Drive<br />
*Rotate<br />
*Scan environment<br />
*Handle intersections<br />
*Handle dead ends<br />
*Discover doors<br />
*Mapping environment<br />
*Make decisions based on the map<br />
*Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|thumb|left|Static LRF]]To solve the problem, it is divided into different blocks with their own functions. We have chosen to make these five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure below shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. Now, a short clarification of the figure will be given. More detailed information of each block will be discussed in the next sections. <br />
<br />
Lets start with the Scan block:<br />
* Scan receives information about the environment. To do this it uses his laser range finder data.<br />
* Based on this data Scan consults its potential field algorithm to make a vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at what interesting point the robot is. For instance, this can be a junction or a dead end.<br />
* Then it should know if the robot has been there before. Based on that, Decision can send a new action to Scan/Drive. <br />
* Now the new vector is based on the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.<br />
<br />
=== Calibration ===<br />
<p>[[File:Originaldata.png|thumb|left|Figure 1 - Calibration: Difference between odometry and LRF]] In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors in some way. The robot was programmed to drive back and forth in front of a wall. At every time instance, it would collect odometry data as well as laser data. The laser data point that was straight in front of the robot was compared to the odometry data, i.e. the driven distance is compared to the measured distance to the wall in front of the robot. 'Figure 1 - Calibration' shows these results. The starting distance from the wall is substracted from the laser data signal. Then, the sign is flipped so that the laser data should match the odometry exactly, if the sensors would provide perfect data. Two things are now notable from this figure:<br />
*The laserdata and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
[[File:StaticLRF.png|thumb|left|alt=Static LRF|Figure 2 - Calibration: Static LRF]]<br />
<br />
The noisy signal that was returned by the laser is presented in 'Figure 2 - Calibration'. Here, a part of the laser data is picked from a robot that was not moving.<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm<br />
* The laser produces a noisy signal. Do not trust one measurement but take the average over time instead.<br />
* The odometry produces no notable noise at all, but it has a significant drift as the driven distance increases. Usage is recommended only for smaller distances (<1 m)</p><br />
<br><br><br><br><br><br><br><br><br />
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
Brief instruction about one block can be found here. In addition, there are also more detailed problem-solving processes and ideas which can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
[[File:Drive.jpg|thumb|left|CP of Drive]] Basically, the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive block] is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using potential field. How the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. Potential field is an easy way to drive through corridors, and making turns. Important is to note that information from the Decision maker can influence the tasks Drive has to do. <br />
<br />
Two other methods were also considered: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. Especially, we worked a lot of time on the Path planning method. However, after testing, the potential field was the most robust and most convenient method.<br />
<br><br><br><br><br><br />
<br />
=== Scan block ===<br />
[[File:Scan_cp_new.jpg|thumb|left|Composition pattern Scan]][http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan The block Scan] processes the laser data of the Laser Range Finders. This data is used to detect corridors, doors, and different kind of junctions. The data that is retrieved by 'scan' is used by all three other blocks. <br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br><br><br><br><br />
<br />
PICO always moves to the place with the most space, using its potential field. However, at junctions and intersections the potential field alone is incapable of leading PICO in the desired direction. Virtual walls are constructed to shield off potential pathways, so that PICO will move in the direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. The scan block is also capable of detecting doors, which is a necessary part of solving the maze. More detailed information about these properties:<br />
<br />
* Potential field<br />
* Detecting junctions/intersections<br />
* Virtual walls<br />
* Collision avoidance<br />
* Detecting doors<br />
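To make the potential-field idea concrete, a minimal version of one update step might look like the sketch below. It is only a sketch: all names, gains and the attract/repulse weighting are illustrative assumptions, not the actual implementation described on the Scan page.

```python
import math

def potential_field_step(angles, ranges, attract_gain=1.0, repulse_gain=0.1):
    """One potential-field update: far LRF readings (open space) attract,
    near readings (walls) repel. Returns a unit (vx, vy) drive direction."""
    ax, ay = 0.0, 0.0
    for angle, r in zip(angles, ranges):
        attract = attract_gain * r                    # pull towards open space
        repulse = repulse_gain / max(r, 0.05) ** 2    # push away from close walls
        weight = attract - repulse
        ax += weight * math.cos(angle)
        ay += weight * math.sin(angle)
    norm = math.hypot(ax, ay)
    if norm < 1e-9:
        return 0.0, 0.0
    return ax / norm, ay / norm

# Example: open corridor straight ahead, walls close by on both sides
angles = [-math.pi / 2, 0.0, math.pi / 2]
ranges = [0.3, 2.0, 0.3]
vx, vy = potential_field_step(angles, ranges)  # points forward
```

In this picture, a virtual wall amounts to overwriting the ranges of a corridor that the decision maker wants to block with small values, so the field pushes PICO away from it.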
<br />
=== Decision block ===<br />
[[File:Composition_Pattern_Decision.png|thumb|left|Composition pattern Decision]]The [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision block] contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
<ins>Input:</ins><br />
* Mapping model<br />
* Scan data<br />
<br />
<ins>Output:</ins><br />
* Specific drive action command<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor. Every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks. If two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached.<br />
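The direction choice of Trémaux's algorithm can be sketched in a few lines of Python. The `mark_counts` dictionary (direction → number of marks) is a hypothetical interface for illustration, not the data structure actually used in our software.

```python
import random

def tremaux_choose(mark_counts):
    """Choose the direction with the fewest Tremaux marks;
    break ties randomly, as the algorithm prescribes."""
    fewest = min(mark_counts.values())
    candidates = [d for d, n in mark_counts.items() if n == fewest]
    return random.choice(candidates)

# At a junction where 'left' is unexplored and the others were entered once:
choice = tremaux_choose({"left": 0, "straight": 1, "back": 1})  # -> "left"
```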
<br />
=== Mapping block ===<br />
[[File:Emc03 wayfindingCP1.png|thumb|left|Map&solve algorithm]] [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping The mapping block] will store the corridors and junctions of the maze. This way, the decision maker can make informed decisions at intersections, to ensure that the maze will be solved in a strategic way.<br />
<br />
To do this, the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used, which is an implementation of Depth First Search.<br />
<br />
The maze consists of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction (i.e., an intersection). An edge is the connection between one node and another, and as such is also called a corridor. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm consists of the possible routes the robot can take (left, straight ahead, right, turn around) and the output is the direction that the Mapping block considers the best option.<br />
<br />
In order to detect loops, the position of the robot must be known as well as the position of each node, to see when the robot has returned to the same location. This is decoupled from the Mapping block and done in the Localisation block.<br />
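A loop can then be detected by checking whether the current position coincides with an already stored node. The sketch below assumes nodes are stored as (x, y) positions in metres and uses an illustrative tolerance; the actual Localisation code is not shown here.

```python
import math

def find_existing_node(nodes, position, tol=0.3):
    """Return the index of a stored node within `tol` metres of `position`,
    or None if this is a new node. A match means the edge just driven
    closes a loop in the maze graph."""
    x, y = position
    for i, (nx, ny) in enumerate(nodes):
        if math.hypot(nx - x, ny - y) < tol:
            return i
    return None

nodes = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.5)]
revisit = find_existing_node(nodes, (1.95, 0.1))  # close to node 1 -> loop
fresh = find_existing_node(nodes, (4.0, 4.0))     # nowhere near a stored node
```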
<br />
=== Localisation block ===<br />
The purpose of localisation is to prevent the robot from driving in a loop indefinitely, by knowing where it is at any given moment in time. Based on this information, it can decide what to do next. As a result, the robot will be able to exit the maze within finite time, or report that there is no exit if it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation page], by separating and discussing the important factors.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge. During this challenge, the robot had to drive through a corridor and then take the first exit (whether left or right). This job could be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implied some extra work to think about properly structured code, because only then could the same script be used for the final challenge. After the corridor competition, our choice could be questioned, because we failed the corridor challenge while other groups succeeded. However, most of those groups had selected approach 1, whereas we already had a decent base for the a-maze-ing challenge. This proved its worth later.<br />
<br />
For the a-maze-ing challenge we decided to use two versions of our software package. In the first run (see the section Videos further down the page), we implemented Trémaux's algorithm together with a localiser that would together map the maze and try to solve it. Our second run was conducted with Trémaux's algorithm and the localisation algorithm turned off. Each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
=== Run 1 ===<br />
The first run was taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way crossing and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused by the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because the 'door' (of course) did not open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight while also evading the walls to the left and right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits for the maximum of 5 seconds that it can take to open the door. It then recognizes that this too is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Trémaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the other nodes. The robot decides to follow the same route as the first time, fails to drive to the corridor with the door in it, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction the robot encountered put PICO into its collision-avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
=== Run 2 ===<br />
For the second run, we ran a version of our software without Trémaux's algorithm implemented and with the global localiser absent. These features were developed later in the project and were not completely finished. For this run, a random decision was passed to the decision maker every time it asked for a new direction to head in.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again, the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same as in the first run. This is because the robot is driven by separate blocks of software, and the blocks that are active while following a corridor were identical for both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. A random decision is then made to go left, followed by a right turn into the corridor with the door. It recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after the bell rang, the robot is programmed to wait for five seconds until it is allowed to move again. During these five seconds, it used the LRF to check whether the door had moved out of its way. Once the passage was clear, the robot started exploring the new area and drove into the open space. Note that, between 0:30 and 0:36, the robot makes a zigzag manoeuvre: when it first drives into the open space, the potential field points at the centre of this open space. Between 0:36 and 0:46 it drives in 'open space mode'. This means that the robot drives to the nearest wall and starts driving alongside it; it should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47 the normal 'corridor mode' starts working again: the potential field method directs the robot towards the middle of the corridor, which explains the sharp turn at 0:47. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the random decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. 
At 1:10, the well-earned applause for PICO started as it finished the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, together with a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive] <br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive] No longer used.<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation]</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive&diff=19960Embedded Motion Control 2015 Group 3/Drive2015-06-23T15:26:56Z<p>S119349: /* Path planning for turning */</p>
<hr />
<div>This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Potential field =<br />
The type of driving used in the Maze Challenge was Potential Field driving. Although Drive was initially designed to be a separate block in the software design, the potential field driving relies mainly on manipulating LRF data, and the resultant vector could almost immediately be entered into the Pico IO layer. This negated the need for a separate Driving class. The Potential Field method can as such be seen on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan] page. The rest of this page will be dedicated to the initial methods considered for driving through the maze.<br />
<br />
= Collision avoidance =<br />
This function is used to drive through corridors without touching the walls. It measures the distance to the nearest wall and identifies whether that wall is on the left or right side. A margin is set in order to avoid a collision; this margin determines when PICO has to adjust its direction. When, for instance, a wall on the left is closer than the margin, PICO has to move to the right. <br />
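As a sketch of this margin check (the function name, margin, gain and sign convention are illustrative, not the values used on PICO):

```python
def sideways_correction(left_dist, right_dist, margin=0.25, gain=0.5):
    """Lateral velocity that keeps PICO away from walls closer than `margin`.
    Sign convention (illustrative): positive steers right, negative left."""
    vy = 0.0
    if left_dist < margin:
        vy += gain * (margin - left_dist)   # left wall too close -> steer right
    if right_dist < margin:
        vy -= gain * (margin - right_dist)  # right wall too close -> steer left
    return vy

# Left wall at 0.15 m, right wall far away -> a small correction to the right
v = sideways_correction(0.15, 0.60)
```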
<br />
= Old drive methods = <br />
For the corridor challenge, we tried a driving method based on extracting some key points, but it turned out not to be robust. After the corridor challenge, we considered three options, of which potential field driving came out as the winner. The two other options use the key point method for driving through straight corridors, but take a different approach to corners. We called the first one the 'simple method'; basically, this method was developed as a back-up plan. The second approach uses path planning to drive around corners.<br />
<br />
=== Drive using key points ===<br />
The initial driving method we considered was based on selecting key points for various situations, and updating the driving inputs according to those. It turned out that robustly determining the key points was virtually impossible in corners, and the following algorithms would fail whenever the key points were lost. <br />
<br />
It should be noted that the Drive class is not responsible for deciding whether it is driving in a corridor, so the algorithms are not robust for varying situations - it expects another class to determine exactly what algorithm should be used at any given time.<br />
<br />
==== Straight through corridors ====<br />
[[File:Drivec.png|thumb|right|Corridor driving]]<br />
The key points for driving through corridors are the points closest to the robot on the left and right side. Since these points are also perpendicular to a straight wall, they give sufficient information (the distance from the wall as well as the direction of the wall) for driving through a straight corridor.<br />
<br />
First, the angle of the left and right wall is bisected, which gives a direction straight through the corridor. The deviation between the direction of the robot and aforementioned direction results in an angular velocity for the robot through a simple gain.<br />
<br />
Next, a set forward velocity vector is created, in the direction straight through the corridor, (Vf in the picture). Furthermore, the distance from the center line results in a sideways velocity vector. These vectors combined give the desired driving vector (bold black arrow in the picture), which is then transformed to the Vx and Vy relative to the robot since the robot may be rotated.<br />
<br />
This type of driving was successfully used during the corridor challenge, and resulted in smooth and fast driving.<br />
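The two steps above can be sketched as follows. The frame convention (x forward, y to the left, angles in radians in the robot frame) and all gains are assumptions for illustration, not the actual implementation.

```python
import math

def corridor_command(ang_left, dist_left, ang_right, dist_right,
                     v_forward=0.3, k_side=0.8, k_turn=1.5):
    """Velocity command for straight-corridor driving, computed from the
    closest point on the left and right wall."""
    # The closest points are perpendicular to the walls, so the direction
    # straight through the corridor bisects the two angles.
    corridor_dir = (ang_left + ang_right) / 2.0
    omega = k_turn * corridor_dir                  # align robot with corridor
    offset = (dist_left - dist_right) / 2.0        # negative: closer to left wall
    vx = v_forward * math.cos(corridor_dir)
    vy = v_forward * math.sin(corridor_dir) + k_side * offset
    return vx, vy, omega

# Robot centred and aligned: closest wall points perpendicular at +/-90 degrees
vx, vy, omega = corridor_command(math.pi / 2, 0.3, -math.pi / 2, 0.3)
```

With the robot centred and aligned, the command reduces to pure forward motion; an offset from the centre line or a rotated heading adds a sideways and angular correction.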
<br />
==== Driving through corners ====<br />
[[File:Drive1.png|thumb|right|Corner driving]]<br />
The key points for driving through corners are the two corner points of the corridor Pico is trying to enter. This turned out to be the biggest letdown of this method: extracting the required local minima while turning was almost impossible to do robustly. Mostly because of this, other options were explored. <br />
<br />
In the figure on the right, one option for driving through corners is shown. The robot maintains a constant forward velocity and makes the corner by varying the angular velocity. First, the desired direction is determined: the bisection of the directions to the two corner points, since this vector always points to the exact center of the corridor. Then, an angular velocity is calculated based on the difference between the desired direction and the current robot direction (the 'error'). A simple gain controller was used, and varying the gain influenced how 'tight' the corner would be made.<br />
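This gain controller can be sketched as below, under the assumption that the robot's own heading is zero in its body frame; the gain value is illustrative.

```python
def corner_omega(ang_corner1, ang_corner2, k=2.0):
    """P-controller for corner driving: the desired direction is the
    bisection of the directions to the two corner points, which points
    to the centre of the corridor to be entered."""
    desired = (ang_corner1 + ang_corner2) / 2.0
    error = desired - 0.0          # current heading is 0 in the robot frame
    return k * error               # higher k -> tighter corner

omega = corner_omega(0.4, 1.2)    # both corners to the left -> turn left
```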
<br />
=== ''' Simple method''' ===<br />
This approach is the simplest one, which also means it is not the fanciest one. However, it is still important to have it working, because we can always fall back on it when the other methods fail, or use it as a minimal working example in other parts of the software. In addition, we learned a lot from it and used it as a base for the other methods.<br />
<br />
In short, the simple method contains 3 steps:<br />
# Drive to corridor without collision.<br />
# Detect an opening (left or right) and stop in front of it.<br />
# Turn 90 degrees and start driving again.<br />
<br />
This method is a robust way to pass the corridor challenge, although it would not be the fastest way.<br />
<br />
=== ''' Path planning for turning '''===<br />
[[File:turningpath.png|thumb|right|alt=puntje.|Not the right figure yet]] Path planning is the second method that we worked on. It can be used when PICO approaches an intersection. Assume that PICO has to go left at a T-junction; then collision avoidance alone is no longer sufficient. So, for instance 0.2 meters before the corner, the ideal path to drive around the corner is calculated. This means that Vx, Vy, Va and the time (for turning) have to be determined at that particular moment. <br />
<br />
Then basically, <br />
* Driving straight stops;<br />
* Turning with the determined Vx, Vy and Va for the associated time to drive around the corner;<br />
* Driving straight again.<br />
In practice, this method turned out to be very hard. Because it is difficult to determine the right values for the variables.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19958Embedded Motion Control 2015 Group 32015-06-23T15:20:38Z<p>S119349: </p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make separate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software designs and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). To make this operational and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* To do so, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
<br />
[[File:behaviour_diagram.png|thumb|left|Block diagram of the connections between the contexts]] The robot has a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level actions the robot should be able to perform. These are:<br />
*Determine situation<br />
*Decision making<br />
*Skill selection<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
*Drive<br />
*Rotate<br />
*Scan environment<br />
*Handle intersections<br />
*Handle dead ends<br />
*Discover doors<br />
*Mapping environment<br />
*Make decisions based on the map<br />
*Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|thumb|left|Software architecture]]To solve the problem, it is divided into different blocks, each with its own function. We have chosen to make these five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. A short clarification of the figure is given here; more detailed information on each block is discussed in the next sections. <br />
<br />
Let's start with the Scan block:<br />
* Scan receives information about the environment. To do this, it uses the laser range finder data.<br />
* Based on this data, Scan consults its potential field algorithm to make a vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together, the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at which interesting point the robot is. For instance, this can be a junction or a dead end.<br />
* It should then know whether the robot has been there before. Based on that, Decision can send a new action to Scan/Drive. <br />
* The new vector is then based on the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.<br />
<br />
=== Calibration ===<br />
<p>[[File:Originaldata.png|thumb|left|Figure 1 - Calibration: Difference between odometry and LRF]] In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors. The robot was programmed to drive back and forth in front of a wall. At every time instance, it collected odometry data as well as laser data. The laser data point straight in front of the robot was compared to the odometry data, i.e. the driven distance was compared to the measured distance to the wall in front of the robot. 'Figure 1 - Calibration' shows these results. The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped, so that the laser data should match the odometry exactly if the sensors provided perfect data. Two things are notable from this figure:<br />
*The laser data and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
[[File:StaticLRF.png|thumb|left|alt=Static LRF|Figure 2 - Calibration: Static LRF]]<br />
<br />
The noisy signal returned by the laser is presented in 'Figure 2 - Calibration'. Here, a part of the laser data is taken from a robot that was not moving.<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm.<br />
* The laser produces a noisy signal. Do not trust a single measurement; take the average over time instead.<br />
* The odometry produces no notable noise at all, but it drifts significantly as the driven distance increases. Usage is recommended only for short distances (<1 m)</p><br />
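The noise figures above can be reproduced from any static LRF log. A small sketch with made-up sample values (the real measurement data are not reproduced here):<br />

```python
import statistics

def lrf_noise_stats(samples_mm):
    """Peak-to-peak amplitude, standard deviation and mean of a
    static LRF log; averaging over time suppresses this noise."""
    spread = max(samples_mm) - min(samples_mm)
    sigma = statistics.pstdev(samples_mm)
    mean = statistics.fmean(samples_mm)
    return spread, sigma, mean

# Hypothetical static measurement of a wall at ~1000 mm:
spread, sigma, mean = lrf_noise_stats([998, 1004, 1001, 995, 1002, 1000])
```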
<br><br><br><br><br><br><br><br><br />
<br />
= Software implementation =<br />
In this section, the implementation of the software is discussed based on the block division described above.<br />
<br />
A brief description of each block is given here. In addition, more detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
[[File:Drive.jpg|thumb|left|CP of Drive]] Basically, the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive block] is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. A potential field is an easy way to drive through corridors and make turns. It is important to note that information from the Decision maker can influence the tasks Drive has to perform. <br />
<br />
Two other methods were also considered: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. We spent a lot of time on the path planning method in particular. However, after testing, the potential field proved to be the most robust and most convenient method.<br />
<br><br><br><br><br><br />
<br />
=== Scan block ===<br />
[[File:Scan_cp_new.jpg|thumb|left|Composition pattern Scan]][http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan The block Scan] processes the laser data of the laser range finder. This data is used to detect corridors, doors, and different kinds of junctions. The data retrieved by 'scan' is used by all three other blocks. <br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br><br><br><br><br />
<br />
Using its potential field, PICO always moves towards the place with the most space. However, at junctions and intersections the potential field alone is incapable of leading PICO in the desired direction. Virtual walls are constructed to shield off potential pathways, so that PICO moves in the direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. The scan block is also capable of detecting doors, which is a necessary part of solving the maze. More detailed information about these properties:<br />
<br />
* Potential field<br />
* Detecting junctions/intersections<br />
* Virtual walls<br />
* Collision avoidance<br />
* Detecting doors<br />
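A virtual wall can be realised by overwriting the laser ranges in a blocked angular sector with a short fake distance, so that the potential field treats that sector as a nearby wall and steers away from it. A hypothetical sketch (the function name, sector width and wall distance are made up):<br />

```python
import math

def apply_virtual_walls(ranges, angles, blocked, wall_dist=0.2, width=0.5):
    """Overwrite the laser ranges inside each blocked angular sector
    with a short fake distance, so that a potential field treats the
    sector as a nearby wall. Hypothetical sketch."""
    out = list(ranges)
    for i, a in enumerate(angles):
        for b in blocked:
            if abs(a - b) < width / 2:
                out[i] = min(out[i], wall_dist)
    return out

# Block the corridor on the left (+pi/2) at a three-beam junction:
masked = apply_virtual_walls([2.0, 2.0, 2.0],
                             [-math.pi / 2, 0.0, math.pi / 2],
                             blocked=[math.pi / 2])
```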
<br />
=== Decision block ===<br />
[[File:Composition_Pattern_Decision.png|thumb|left|Composition pattern Decision]]The [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision block] contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
<ins>Input:</ins><br />
* Mapping model<br />
* Scan data<br />
<br />
<ins>Output:</ins><br />
* Specific drive action command<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor. Every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks. If two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached.<br />
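The mark-counting rule above boils down to: among the candidate edges, take one with the fewest marks, breaking ties at random. A minimal sketch, assuming each candidate edge carries a visit counter:<br />

```python
import random

def tremaux_choice(marks, rng=random):
    """Given the number of marks on each candidate edge, return the
    index of a least-marked edge, tie-broken at random (Tremaux's
    rule: never take a twice-marked edge while a less-marked one
    exists)."""
    fewest = min(marks)
    candidates = [i for i, m in enumerate(marks) if m == fewest]
    return rng.choice(candidates)

# Edges marked [2, 0, 1, 0]: the rule must pick edge 1 or edge 3.
choice = tremaux_choice([2, 0, 1, 0])
```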
<br />
=== Mapping block ===<br />
[[File:Emc03 wayfindingCP1.png|thumb|left|Map&solve algorithm]] [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping The mapping block] stores the corridors and junctions of the maze, so that the decision block can consider certain possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As mentioned in the previous paragraph, the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used.<br />
<br />
The maze consists of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the set of possible routes the robot can take (left, straight ahead, right, turn around) and the output is a choice of direction that will lead to solving the maze.<br />
<br><br><br><br><br><br><br />
The schedule looks like this:<br />
<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is at a new node or at an old node.<br />
** The robot figures out which node it came from. It can then define which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot also has to recognise whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened. In that case: go straight ahead and mark the edge that led up to the door as ''visited 2 times''. If not, choose the edge you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead. Otherwise left, otherwise right.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
*Translation from chosen edge to turn command:<br />
**The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
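The bookkeeping in the schedule above can be sketched as follows. This is a hypothetical illustration (class and function names are made up, and the 0.3 m node-matching tolerance is a guess): node lookup in global coordinates, marking an edge as visited once more, and translating a chosen global edge direction into a local turn command.<br />

```python
import math

class MazeMap:
    """Minimal node/edge bookkeeping for the schedule above
    (hypothetical sketch; the real block stores more per edge)."""
    def __init__(self, tol=0.3):
        self.nodes = []    # global (x, y) per node
        self.visits = {}   # (node_a, node_b) -> times traversed
        self.tol = tol

    def node_at(self, x, y):
        """Return the index of the node at (x, y), creating it if new."""
        for i, (nx, ny) in enumerate(self.nodes):
            if math.hypot(nx - x, ny - y) < self.tol:
                return i
        self.nodes.append((x, y))
        return len(self.nodes) - 1

    def traverse(self, a, b):
        """Mark the edge between nodes a and b as 'visited once more'."""
        key = (min(a, b), max(a, b))
        self.visits[key] = self.visits.get(key, 0) + 1
        return self.visits[key]

def turn_command(edge_angle, robot_heading):
    """Translate a chosen edge direction (global frame) into a local
    turn angle, wrapped to (-pi, pi]."""
    d = edge_angle - robot_heading
    while d <= -math.pi:
        d += 2 * math.pi
    while d > math.pi:
        d -= 2 * math.pi
    return d

m = MazeMap()
a = m.node_at(0.0, 0.0)               # start node
b = m.node_at(2.0, 0.1)               # new junction ~2 m ahead
count = m.traverse(a, b)              # edge a-b now visited once
left = turn_command(math.pi / 2, 0.0) # edge to the north, robot facing east
```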
<br />
=== Localisation block ===<br />
The purpose of localisation is to prevent the robot from driving in a loop indefinitely, by knowing where it is at any given moment. Knowing where it is, the robot can decide what to do next. As a result, it will be able to exit the maze within finite time, or report that there is no exit if it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation page], by separating and discussing the important factors.<br />
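Recognising a previously visited place can be done by snapping the estimated global position onto the orthogonal grid mentioned earlier, so that two noisy estimates of the same junction map to the same cell. A hypothetical sketch (the 0.5 m grid pitch is a guessed value):<br />

```python
def grid_cell(x, y, pitch=0.5):
    """Snap a global position (metres) onto the orthogonal grid used
    to recognise previously visited places. Hypothetical sketch;
    the pitch value is a guess."""
    return (round(x / pitch), round(y / pitch))

# Two noisy estimates of the same junction land in the same cell:
same = grid_cell(2.02, -0.98) == grid_cell(1.97, -1.04)
```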
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge. During this challenge, we had to make the robot drive through a corridor and then take the first exit (whether left or right). This job can be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implied some extra work to think about properly structured code, because only then can the same script be used for the final challenge. In hindsight, our choice can be debated: we failed the corridor challenge while other groups succeeded. However, most of those groups had selected approach 1, whereas we already had a decent base for the a-maze-ing challenge. This proved its worth later on.<br />
<br />
For the a-maze-ing challenge we decided to use two versions of our software package. In the first run (see the Videos section further down the page), we implemented Trémaux's algorithm together with a localiser, which together would map the maze and try to solve it. Our second run was conducted with Trémaux's algorithm and the localisation algorithm turned off. Each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
=== Run 1 ===<br />
The first run was taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognises a four-way crossing and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused by the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because the 'door' (of course) did not open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight while also evading the walls to the left and right. It recognises another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits the maximum of 5 seconds that opening the door can take. It then recognises that this too is a dead end and not a door. After turning around it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Trémaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the other nodes. The robot decides to follow the same route as the first time, fails to drive to the corridor with the door in it, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction the robot encountered sent PICO into its collision-avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
=== Run 2 ===<br />
For the second run, we ran a version of our software without Trémaux's algorithm and without the global localiser. These features were developed later in the project and were not 100% finished. For this run, a random decision was passed to the decision maker every time it asked for a new direction to head in.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again, the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same as in the first run. This is because the robot is driven by separate blocks of software, and the blocks active while following a corridor were identical in both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. A random decision is then made to go left, followed by a right turn into the corridor with the door. The robot recognises the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after the bell was rung, the robot is programmed to wait five seconds before it is allowed to move again. During these five seconds, it uses the LRF to check whether the door has moved out of its way. Once the passage was clear, the robot started exploring the new area and drove into the open space. Note that, between 0:30 and 0:36, the robot makes a zigzag manoeuvre: when it first drives into the open space, the potential field points at the centre of that space. Between 0:36 and 0:46 it drives in 'open space mode'. This means that the robot drives to the nearest wall and starts following it, so that it will always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47 the normal 'corridor mode' takes over again: the potential field method directs the robot towards the middle of the corridor, which explains the sharp turn it makes at 0:47. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. At 1:10, the well-earned applause for PICO starts as it finishes the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, as well as a short evaluation of each one.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive] <br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive] No longer used.<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation]</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive&diff=19957Embedded Motion Control 2015 Group 3/Archive2015-06-23T15:19:07Z<p>S119349: </p>
<hr />
<div>This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
<br />
<br />
<br />
<br />
<br />
This page is intentionally left blank.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive&diff=19956Embedded Motion Control 2015 Group 3/Archive2015-06-23T15:18:42Z<p>S119349: Replaced content with '= Archive MAYBE WE CAN JUST DELETE THIS STUFF AND PAGE? =
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-…'</p>
<hr />
<div>= Archive MAYBE WE CAN JUST DELETE THIS STUFF AND PAGE? = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
This page is intentionally left blank.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19955Embedded Motion Control 2015 Group 32015-06-23T15:18:17Z<p>S119349: /* Archive */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make separate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software design and how to apply it in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight into the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). To make this operational and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge, in which the robot has to find its way out of a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls; to do so, it must perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
<br />
[[File:behaviour_diagram.png|thumb|left|Blockdiagram for connection between the contexts]] The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The tasks are the highest-level actions the robot should be able to perform. These are:<br />
*Determine situation<br />
*Decision making<br />
*Skill selection<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
*Drive<br />
*Rotate<br />
*Scan environment<br />
*Handle intersections<br />
*Handle dead ends<br />
*Discover doors<br />
*Map the environment<br />
*Make decisions based on the map<br />
*Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|thumb|left|Overall structure]]To solve the problem, the software is divided into blocks, each with its own function. We have chosen to make these five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure shows a simplified scheme of the software architecture and how the individual blocks cohere. In practice, Drive/Scan and Localisation/Mapping are closely linked. A short clarification of the figure is given below; more detailed information on each block is discussed in the next sections. <br />
<br />
Let's start with the Scan block:<br />
* Scan receives information about the environment through its laser range finder data.<br />
* Based on this data, Scan consults its potential field algorithm to produce a vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together, the LRF and odometry data determine the travelled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at which interesting point the robot is, for instance a junction or a dead end.<br />
* It should also know whether the robot has been there before. Based on that, Decision can send a new action to Scan/Drive. <br />
* The new vector is then based on both the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.<br />
<br />
=== Calibration ===<br />
<p>[[File:Originaldata.png|thumb|left|Figure 1 - Calibration: Difference between odometry and LRF]] In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors. The robot was programmed to drive back and forth in front of a wall. At every time instance, it collected odometry data as well as laser data. The laser data point straight in front of the robot was compared to the odometry data, i.e. the driven distance was compared to the measured distance to the wall in front of the robot. 'Figure 1 - Calibration' shows these results. The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped, so that the laser data should match the odometry exactly if the sensors provided perfect data. Two things are notable from this figure:<br />
*The laser data and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
[[File:StaticLRF.png|thumb|left|alt=Static LRF|Figure 2 - Calibration: Static LRF]]<br />
<br />
The noisy signal returned by the laser is presented in 'Figure 2 - Calibration'. Here, a part of the laser data is taken from a robot that was not moving.<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm.<br />
* The laser produces a noisy signal. Do not trust a single measurement; take the average over time instead.<br />
* The odometry produces no notable noise at all, but it drifts significantly as the driven distance increases. Usage is recommended only for short distances (<1 m)</p><br />
<br><br><br><br><br><br><br><br><br />
<br />
= Software implementation =<br />
In this section, the implementation of the software is discussed based on the block division described above.<br />
<br />
A brief description of each block is given here. In addition, more detailed problem-solving processes and ideas can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
[[File:Drive.jpg|thumb|left|CP of Drive]] Basically, the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive block] is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. A potential field is an easy way to drive through corridors and make turns. It is important to note that information from the Decision maker can influence the tasks Drive has to perform. <br />
<br />
Two other methods were also considered: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. We spent a lot of time on the path planning method in particular. However, after testing, the potential field proved to be the most robust and most convenient method.<br />
<br><br><br><br><br><br />
<br />
=== Scan block ===<br />
[[File:Scan_cp_new.jpg|thumb|left|Composition pattern Scan]][http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan The block Scan] processes the laser data of the laser range finder. This data is used to detect corridors, doors, and different kinds of junctions. The data retrieved by 'scan' is used by all three other blocks. <br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br><br><br><br><br />
<br />
Using its potential field, PICO always moves towards the place with the most space. However, at junctions and intersections the potential field alone is incapable of leading PICO in the desired direction. Virtual walls are constructed to shield off potential pathways, so that PICO moves in the direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. The scan block is also capable of detecting doors, which is a necessary part of solving the maze. More detailed information about these properties:<br />
<br />
* Potential field<br />
* Detecting junctions/intersections<br />
* Virtual walls<br />
* Collision avoidance<br />
* Detecting doors<br />
<br />
=== Decision block ===<br />
[[File:Composition_Pattern_Decision.png|thumb|left|Composition pattern Decision]]The [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision block] contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
<ins>Input:</ins><br />
* Mapping model<br />
* Scan data<br />
<br />
<ins>Output:</ins><br />
* Specific drive action command<br />
<br />
The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor. Every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks. If two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached.<br />
<br />
=== Mapping block ===<br />
[[File:Emc03 wayfindingCP1.png|thumb|left|Map&solve algorithm]] [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping The mapping block] stores the corridors and junctions of the maze, so that the decision block can consider certain possibilities and ensure that the maze is solved in a strategic way.<br />
<br />
As mentioned in the previous paragraph, the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used.<br />
<br />
The maze consists of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the set of possible routes the robot can take (left, straight ahead, right, turn around) and the output is a choice of direction that will lead to solving the maze.<br />
<br><br><br><br><br><br><br />
The schedule looks like this:<br />
<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is on a new node or an old node.<br />
** The robot figures out from which node it came. It can then determine which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot also has to recognize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''Visited 2 times''. If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead. Otherwise left, otherwise right.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
*Translation from chosen edge to turn command:<br />
**The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as a 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
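The translation step in the schedule above comes down to modular arithmetic on the four principal directions. A minimal sketch (the function name and the meaning of the resulting integers are assumptions for illustration):<br />

```cpp
#include <cassert>

// Hypothetical helper: convert a chosen global direction (0..3 for the four
// principal directions) into a local turn command, given the robot's current
// heading. 0 = straight ahead, 1 = one step counter-clockwise, 2 = turn
// around, 3 = one step clockwise (which integer means left/right depends on
// the chosen direction convention).
int toLocalTurn(int chosenGlobalDir, int currentHeading) {
    return (chosenGlobalDir - currentHeading + 4) % 4;  // wrap into 0..3
}
```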
<br />
=== Localisation block ===<br />
The purpose of the localisation is that the robot can prevent itself from driving in a loop for infinite time, by knowing where it is at a given moment in time. By knowing where it is, it can decide based on this information what to do next. As a result, the robot will be able to exit the maze within finite time, or it will tell there is no exit if it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation page], by separating and discussing the important factors.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge. During this challenge, the robot had to drive through a corridor and then take the first exit (whether left or right). This job could be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implied some extra work to think about a properly structured code, because only then could the same script be used for the final challenge. After the corridor competition our choice was debatable, because we failed the corridor challenge while other groups succeeded. However, most of those groups had selected approach 1, whereas we already had a decent base for the a-maze-ing challenge. This proved its worth later on.<br />
<br />
For the a-maze-ing challenge we decided to use two versions of our software package. In the first run (see the Videos section further down the page), we implemented Tremaux's algorithm together with a localiser, which together would map the maze and try to solve it. Our second run was conducted with Tremaux's algorithm and the localisation algorithm turned off. Each time the robot encountered an intersection, a random decision was made on where to go next.<br />
<br />
=== Run 1 ===<br />
The first run was taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way intersection and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused by the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because the 'door' (of course) didn't open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight while also evading the walls to the left and right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits the maximum of 5 seconds that opening the door may take. It then recognizes that this too is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Tremaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the other nodes. The robot decides to follow the same route as the first time, fails to drive into the corridor with the door, and eventually gets stuck in a loop.<br />
<br />
The main reason for failure is thought to be the node placement. The first T-junction the robot encountered sent PICO into its collision-avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by the LRF was not yet ready to be deployed as a whole. Therefore, we decided to do the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.<br />
<br />
=== Run 2 ===<br />
For the second run, we ran a version of our software without Tremaux's algorithm and without the global localiser. These features were developed later in the project and were not 100% finished. For this run, a random decision was passed to the decision maker every time it asked for a new direction to head in.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same as in the first run. This is because the robot is driven by separate blocks of software, and the blocks that are active while following a corridor were identical in both runs. At 0:07, the collision detection kicks in just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. A random decision is then made to go left, followed by a right turn into the corridor with the door. It recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after the bell was rung, the robot is programmed to wait for five seconds before it is allowed to move again. During these five seconds, it uses the LRF to check whether the door has moved out of its way. Once the passage was clear, the robot started exploring the new area and drives into the open space. Note that, between 0:30 and 0:36, the robot makes a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of that space. Between 0:36 and 0:46 it drives in 'open space mode', meaning that the robot drives to the nearest wall and starts following it. It should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47 the normal 'corridor mode' starts working again: the potential field method directs the robot towards the middle of the corridor, which explains the sharp turn it makes at 0:47. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, that decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. At 1:10, the well-earned applause for PICO starts as it finishes the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments were done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find brief information about the dates and goals of the experiments, as well as a short evaluation of each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] was used as an archive, but is currently not used.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive] <br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation]</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping&diff=19954Embedded Motion Control 2015 Group 3/Mapping2015-06-23T15:03:36Z<p>S119349: /* Schedule */</p>
<hr />
<div>= Mapping = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth First Search), which proves to be an efficient way of solving a maze with minimum backtracking.<br />
<br />
= World model structure =<br />
The maze will consist of nodes and edges; i.e., an undirected graph. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is the direction that is advised, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive in any of the principal directions, and that the junctions can only be of specific formats, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠, four-way intersection ╬, and dead end ╥.<br />
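Under this encoding of the four principal directions as integers, 'turning around' is simply adding two modulo four, as the (D+2)%4 flip used in the code further down this page:<br />

```cpp
#include <cassert>

// With the four principal directions encoded as 0..3, 'turning around'
// is adding two modulo four (the same (D+2)%4 flip used in the code below).
int flipD(int d) {
    return (d + 2) % 4;
}
```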
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation localisation class].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anything between one (dead end) and four (cross-intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* The number of times Pico has traversed the corridor. This is important for the Tremaux algorithm, which will be explained later.<br />
* Travel time for a corridor. This can be used to give priorities in case multiple options are present. This may later be used to define weighting functions to choose a next corridor.<br />
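Put together, the stored data can be sketched as two plain structs. Field names follow the code snippets on this page; the real code keeps these as BGL bundled properties and uses cv::Point2d for positions, so the exact types and the use of a plain index for corridors are simplifications:<br />

```cpp
#include <cassert>

// Sketch of the stored data. Field names follow the snippets on this page;
// the real code keeps these as BGL bundled properties and uses cv::Point2d
// for positions, so the exact types here are simplifications.
struct Position { double x, y; };

struct CorridorInfo {
    int timesVisited;     // Tremaux mark count for this corridor
    double travelTime;    // last measured travel time, used for priorities
    CorridorInfo() : timesVisited(0), travelTime(0.0) {}
};

struct NodeInfo {
    Position position;    // global coordinates, used for loop closing
    int corridor[4];      // one (possible) corridor per principal direction
};

// Mark a corridor as traversed once more and record its travel time,
// as the update step described on this page does.
void visit(CorridorInfo& c, double dt) {
    c.timesVisited += 1;
    c.travelTime = dt;
}
```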
<br />
<br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graph Library) graph object. This means that most of the overhead of maintaining an undirected graph is handled by an existing library. The library is also extremely extendable by means of Bundled Properties, which facilitate an arbitrary number of properties for nodes and edges. The main disadvantage is the syntactic overhead, generated precisely because BGL is so extendable, although this overhead is mostly contained to constructing the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** The robot tries to find where it is located in global coordinates. It can then decide whether it is on a new node or an old node.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything about this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisited old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out from which node it came. It can then determine which edge it has been traversing, and marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot also has to recognize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''Visited 2 times'' (not currently implemented due to concerns about the robustness of the door detection). If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, follow any edge that has been visited only once.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
**Above points can be summarized as 'pick the least visited edge'. If formulated this way, the algorithm will also allow for 'accidents', e.g., an edge visited more than two times, which should definitely be avoided, and will never stop exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
**In case more than one edge is an option according to Tremaux, a cost function determines the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that will explore as many options as possible in a given time). The cost function could be extended by analyzing the maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although currently not implemented, this again shows a huge advantage of using an existing graphing library, since graph traversal is built into the library.<br />
<br />
//Done with the mapping process. Now, let's pick a node to go to!<br />
int minTimesVisited = INT_MAX; //Largest int value; any actual visit count will be smaller.<br />
double minCost = HUGE_VAL;<br />
int decidedDirection; //GLOBAL COORDINATES<br />
for(int direction = 0; direction<4; ++direction){<br />
//check if this direction is actually possible<br />
if(isPossibleDirection(type,direction)){<br />
int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
if(thisCorridorTimesVisited <= minTimesVisited){<br />
if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;}<br />
minTimesVisited = thisCorridorTimesVisited;<br />
double cost=getCost(direction);<br />
if(cost<minCost){<br />
minCost = cost; //Track the lowest cost among the least-visited options.<br />
decidedDirection = direction;<br />
}<br />
}<br />
}<br />
}<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a global direction pointing from the node to the direction of the edge. The robot must receive a command that will guide it through the maze in local coordinates. Globally, we can assume that there are only four possible directions because of the axis-aligned maze: North, west, south and east. These directions are represented by the integers 0, 1, 2 and 3 respectively in our code. <br />
* The actual command is formulated<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as 'prevNode', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux' algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paint brush. Continuously, the maze solver will drag the brush behind him, creating a line on the floor. When an intersection is encountered, the solver will see if any corridors do not have a line of paint on the floor. He will take one of those corridors, since he hasn't explored that bit of maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but to use the world model we created instead. Pico updates an integer for each corridor indicating the number of times it has visited that corridor. Pico then simply prefers the corridor with the fewest visits. In theory, this should always be 0, 1 or 2 times (with 0 preferred over 1, and 2 generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something; in that case, 2 visits are preferred over 3, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not prescribe what to do when there are multiple possibilities; it does not matter for solving a maze if time is not an issue. We, however, would like to explore as many nodes as possible quickly, so instead of just picking a random direction, we pick the direction which minimizes the travel time to the next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen if at all possible.<br />
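Such a priority model can be sketched as a small cost function. The weights and the local-turn encoding below are illustrative placeholders, not our tuned values:<br />

```cpp
#include <cassert>

// Illustrative cost function for the priority model. Local turn encoding
// assumed: 0 = straight, 2 = U-turn, 1/3 = left or right turn. Straight is
// cheapest, turns cost more, U-turns are severely penalized, and the known
// travel time of an edge is added so short corridors are preferred. The
// weights are placeholders, not our tuned values.
double corridorCost(int localTurn, double knownTravelTime) {
    double turnCost;
    switch (localTurn) {
        case 0:  turnCost = 0.0;   break;  // straight ahead
        case 2:  turnCost = 100.0; break;  // U-turn
        default: turnCost = 10.0;  break;  // left or right turn
    }
    return turnCost + knownTravelTime;
}
```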
<br />
This can in theory be extended with more elaborate searches, for example trying not to go to a node that will lead only to twice-visited edges, which should be perfectly possible using some built-in algorithms from Boost. However, this is not implemented yet.</div>
<hr />
<div>= Mapping = <br />
This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Mapping block =<br />
The mapping block contains a very high-level model of the world. The mapping has been created in such a way that only essential information is stored, in order to create a very flexible and modular world model.<br />
<br />
For solving the maze, a variant of the [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html Tremaux algorithm] is used. The Tremaux algorithm is an implementation of DFS (Depth First Search), which proves to be an efficient way of solving a maze with minimum backtracking.<br />
<br />
= World model structure =<br />
The maze will consist of nodes and edges; i.e., an undirected graph. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. An edge is the connection between one node and another. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is the direction that is advised, based on the Tremaux algorithm.<br />
<br />
Since the maze is axis-aligned, a simplified coordinate system can be used, which only has four principal directions (in simple terms, 'Up', 'Down', 'Left', and 'Right'). Although the node/edge structure can in principle work for a non-axis-aligned maze, the current implementation has some methods specific to an axis-aligned maze, to reduce the complexity of the implementation. This is expressed in two ways: it is assumed that Pico can only drive in any of the principal directions, and that the junctions can only be of specific formats, namely (from Pico's perspective): T-junction ╦, left junction ╣, right junction ╠ four-way intersection ╬ and dead end ╥ . <br />
<br />
For each node, the following information is stored:<br />
* Position. The position is used to identify and close loops within the maze, by matching a new node with a previous node. The position is defined in global coordinates. To obtain the global coordinates, we will use the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation localisation class].<br />
* Adjacent corridors. Since the maze is axis-aligned, there can be anything between one (dead end) and four (cross-intersection) corridors/edges leading to a node. Because of this, the corridors are stored in an array with each element corresponding to a (global) direction.<br />
<br />
For each corridor/edge, the following information is stored:<br />
* Number of times Pico has traversed a corridor. This is important for Tremaux algorithm, which will be explained later.<br />
* Travel time for a corridor. This can be used to give priorities in case multiple options are present. This may later be used to define weighting functions to choose a next corridor.<br />
<br />
<br />
<br />
In the implementation, the graph is stored in a BGL (Boost Graphing Library) Graph object. This means that most of the overhead of maintaining an undirected graph is done by an existing library. The library used is also extremely extendable by means of Bundled Properties, which facilitate an arbitrary number of properties for nodes and edges. The main disadvantage is the syntactic overhead generated especially because BGL is so extendable, although the overhead is mostly contained into making the basic graph objects, which only has to be done once.<br />
<br />
= Schedule =<br />
[[File:Emc03 wayfindingCP1.png|400px|center|thumb|Map&solve algorithm (update?)]]<br />
The schedule looks like this:<br />
* Updating the map:<br />
** Robot tries to find where he is located in global coordinates. Now it can decide if it is on a new node or on an old node.<br />
<br />
//Where is the node located?<br />
cv::Point2d nodePosition = getNodePos();<br />
//Check if we're going to an already known node (in which case we won't do any matching)<br />
if(nextNode == noNode){<br />
// We do not know anything abbout this node yet, so let's see if we can match it with another one.<br />
//Iterate over all nodes, and pick the closest.<br />
double minDistance = HUGE_VAL; //By definition, more than the largest double you'll ever get.<br />
Node match = noNode; //Initialize to noNode, so we can check if there was no match.<br />
NodeIt node, lastNode; //Set up for the for loop iterator magic.<br />
for(tie(node,lastNode) = vertices(maze); node!=lastNode; ++node){ //Iterator magic.<br />
//Skip checking the distance if the node we're checking is an undiscovered node.<br />
if(*node!=this->noNode)<br />
{<br />
double distance = cv::norm(nodePosition-maze[*node].position); //Calculate distance (2-norm)<br />
if(distance<SAME_NODE_TOLERANCE && distance < minDistance){<br />
match = *node; //Set the match to the close node we found.<br />
minDistance = distance; //Set the new minimum for the closest node.<br />
}<br />
}<br />
}<br />
if(match==noNode){<br />
// Previous loop did not result in a match, so: new node encountered! Let's create it.<br />
match = add_vertex(NodeInfo(nodePosition),maze);<br />
this->nextNode = match; //Now we know where we were going to (was noNode because we didn't know before)<br />
<br />
//Initialize each corridor to default (we don't know yet where they lead)<br />
Corridor defaultCorr = add_edge(nextNode,noNode,maze).first;<br />
for(int i=0; i<4; ++i){ //Iterate through all four possible directions for a node and set them to default.<br />
maze[nextNode].corridor[i]=defaultCorr;<br />
}<br />
// But we do know where we came from, so set the edge from whence we came.<br />
this->currentCorridor = add_edge(this->prevNode,this->nextNode,maze).first; //Make a new edge<br />
maze[prevNode].corridor[this->prevNodeExitedAt] = this->currentCorridor; //Update edge of previous node<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor; //Idem for next node. (D+2)%4 = flip direction.<br />
} else {<br />
// Revisted old node.<br />
this->nextNode = match;<br />
maze[nextNode].corridor[flipD(this->currentDirection)] = this->currentCorridor;<br />
}<br />
<br />
** The robot figures out which node it came from. Now it can determine which edge it has been traversing, and it marks that edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge: energy consumption, travel time, shape of the edge... This is not necessary for the algorithm, but it may help in formulating more advanced weighting functions for optimization.<br />
** The robot will also have to recognize whether the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
<br />
//Above process updated nextNode to be a proper node in our system. Let's update the corridor.<br />
this->maze[this->currentCorridor].timesVisited += 1;<br />
this->maze[this->currentCorridor].travelTime = scanvars.timeStamp - this->lastTimeStamp;<br />
this->lastTimeStamp = scanvars.timeStamp;<br />
<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: go straight ahead and mark the edge that led up to the door as ''visited 2 times'' (not currently implemented due to concerns about the robustness of the door detection). If not, choose the edge where you came from.<br />
** Are there any unvisited edges connected to the current node? In that case, follow any unvisited edge.<br />
** Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that have been visited once, choose any edge that has been visited only once.<br />
** Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
** The above points can be summarized as 'pick the least visited edge'. Formulated this way, the algorithm also allows for 'accidents', e.g., an edge visited more than two times (which should definitely be avoided), and will never stop exploring the maze even if it thinks it is back at the starting point, which improves robustness.<br />
** In case more than one edge is an option according to Tremaux, a cost function determines the best option to follow. This cost function penalizes turning over going straight ahead, severely penalizes U-turns, and incorporates the travel time of known edges (shorter being better, since that will explore as many options as possible in a given time). The cost function could be extended by analyzing the maze as a whole, to determine which direction will explore as much of the maze as possible in the shortest time. Although currently not implemented, this again shows a huge advantage of using an existing graphing library, since graph traversal is built into the library.<br />
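The cost function itself is not shown on this page. Below is a minimal sketch of what such a function could look like; the name `corridorCost`, the weights, and the local turn encoding (0 = straight, 1 = left, 2 = U-turn, 3 = right) are assumptions for illustration, not the actual implementation.<br />

```cpp
// Hypothetical sketch of the cost function described above: going straight
// is free, 90-degree turns cost a little, U-turns cost a lot, and the travel
// time of a known corridor is added on top so short corridors are preferred.
double corridorCost(int localTurn, double travelTime)
{
    double cost = travelTime;        // Shorter corridors are cheaper.
    if (localTurn == 1 || localTurn == 3)
        cost += 1.0;                 // Mild penalty for a 90-degree turn.
    else if (localTurn == 2)
        cost += 10.0;                // Severe penalty for a U-turn.
    return cost;
}
```

With these weights, a U-turn into a short corridor can still lose against going straight into a somewhat longer one, which matches the priority described above.<br />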
<br />
//Done with the mapping process. Now, let's pick a node to go to!<br />
int minTimesVisited = INT_MAX; //Largest possible int (from &lt;climits&gt;); HUGE_VAL is a double and does not fit in an int.<br />
double minCost = HUGE_VAL;<br />
int decidedDirection = -1; //GLOBAL COORDINATES; -1 means no decision has been made yet.<br />
for(int direction = 0; direction<4; ++direction){<br />
//check if this direction is actually possible<br />
if(isPossibleDirection(type,direction)){<br />
int thisCorridorTimesVisited = maze[maze[nextNode].corridor[direction]].timesVisited;//Boost can only access properties of edges and vertices as maze[edge] or maze[vertex]<br />
if(thisCorridorTimesVisited <= minTimesVisited){<br />
if(thisCorridorTimesVisited<minTimesVisited) { minCost = HUGE_VAL;}<br />
minTimesVisited = thisCorridorTimesVisited;<br />
double cost=getCost(direction);<br />
if(cost<minCost){<br />
minCost = cost; //Remember the cost of the best direction so far.<br />
decidedDirection = direction;<br />
}<br />
}<br />
}<br />
}<br />
<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a global direction pointing from the node towards the edge. The robot must receive a command that will guide it through the maze in local coordinates. Globally, we can assume that there are only four possible directions because of the axis-aligned maze: north, west, south, and east. These directions are represented by the integers 0, 1, 2, and 3 respectively in our code.<br />
* The actual command is then formulated:<br />
<br />
int decision = (decidedDirection - this->currentDirection + 4)%4; //From global to local coordinates<br />
return static_cast<WhereToNow>(decision); //Cast from int to enum.<br />
<br />
* A set-up is made for the next node.<br />
** E.g., the current node is saved as a 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and can start figuring out the next step.<br />
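The direction arithmetic used above can be illustrated with two one-line helpers. `flipD` mirrors the (D+2)%4 trick from the code comments, and `toLocal` is the global-to-local conversion from the snippet above; the standalone helper names are for illustration only.<br />

```cpp
// Four global directions encoded as 0 = north, 1 = west, 2 = south, 3 = east.
// flipD reverses a direction; toLocal converts a chosen global direction into
// a turn command relative to the robot's current heading.
int flipD(int d)                 { return (d + 2) % 4; }
int toLocal(int global, int cur) { return (global - cur + 4) % 4; }
```

For example, flipping north (0) yields south (2), and choosing the direction you are already facing always yields 0, i.e. 'go straight ahead'.<br />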
<br />
= Tremaux Algorithm =<br />
<br />
Tremaux' algorithm is an implementation of Depth-First Search. It is best visualized as walking through the maze with a big bucket of paint and a paint brush. Continuously, the maze solver will drag the brush behind him, creating a line on the floor. When an intersection is encountered, the solver will see if any corridors do not have a line of paint on the floor. He will take one of those corridors, since he hasn't explored that bit of maze yet. At some point, the solver will have no unpainted corridors to go to, so he will have to backtrack. At that point, a corridor will have '''two''' lines on the floor, which indicates that not only has the solver been there, but there was also nothing left for him to explore there. As such, corridors with two lines on the floor should be avoided, and corridors with no lines on the floor should be preferred.<br />
<br />
A simple video of how this should be visualized can be found [https://www.youtube.com/watch?v=gVSEJdSQZVQ here]. <br />
<br />
In our implementation, we chose not to equip Pico with a bucket of paint, but to use the world model we created instead. Pico updates an integer for each corridor indicating the number of times he has visited that corridor. Then, Pico simply prefers the corridor with the fewest visits. In theory, this should always be 0, 1 or 2 times (with 0 times preferred over 1 time, and 2 times generally avoided). However, we decided to also allow for integers greater than 2, in case the world model has miscounted something - in this case, 2 times is preferred over 3 times, because the latter means that a certain area is '''definitely''' explored more than necessary.<br />
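The preference rule amounts to picking the corridor with the smallest visit counter. The toy function below shows this with a plain `std::array`; in the real implementation the visit counts are stored as BGL edge properties, so this is a simplified sketch, not the actual code.<br />

```cpp
#include <algorithm>
#include <array>

// Return the direction index (0..3) of the corridor with the fewest visits.
// Ties go to the lowest index; counters above 2 are allowed and simply rank
// as the least preferred options, matching the robustness argument above.
int leastVisited(const std::array<int, 4>& visits)
{
    return static_cast<int>(
        std::min_element(visits.begin(), visits.end()) - visits.begin());
}
```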
<br />
Furthermore, we extended Tremaux with a priority model. In principle, Tremaux does not prescribe what to do when there are multiple possibilities - it does not matter for solving a maze if time is not an issue. We would, however, like to explore as many nodes as possible quickly, so instead of just picking a random direction, we pick the direction which minimizes the travel time to the next node. For example, turning around is generally not preferred, and the shortest corridor to the next node is chosen if at all possible.<br />
<br />
This can in theory be extended with more elaborate searches, for example trying not to go to a node that will lead only to twice-visited edges; this should be perfectly possible using some built-in algorithms from Boost. However, this is not implemented yet.</div>
S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3&diff=19937Embedded Motion Control 2015 Group 32015-06-23T13:57:12Z<p>S119349: /* General information */</p>
<hr />
<div>This is the Wiki-page for EMC-group 3. <br />
<br />
<br />
= Checklist Wiki contents =<br />
{| border="1" class="wikitable"<br />
|-<br />
! <br />
!<br />
!<math>{{Check}}</math><br />
|- <br />
|1.1<br />
|Overview software architecture and approach<br />
|<br />
|-<br />
|1.2<br />
|How does it map to the paradigms explained in this course?<br />
|<br />
|-<br />
|2.1<br />
|Description why our solution is awesome (nice images)<br />
|<br />
|- <br />
|2.2<br />
|Why unique/ what are we proud of?<br />
|<br />
|- <br />
|3.1<br />
|What difficult problems and how solved?<br />
|<br />
|-<br />
|4.1<br />
|Evaluation maze challenge (well/wrong/why/improvements?)<br />
|<br />
|-<br />
|5.1<br />
|videos / gifs / animations / diagrams / pictures<br />
|<br />
|- <br />
|6.1<br />
|Link to interesting pieces of the code (use snippet system like https://gist.github.com)<br />
|<br />
|-<br />
|6.2 <br />
| Comment the code and add introduction/explanatory<br />
| <br />
|- <br />
| 6.3 <br />
| Make seperate section called 'code'<br />
| <br />
|}<br />
<br />
= Group members = <br />
{| border="1" class="wikitable"<br />
|-<br />
! Name <br />
! Student number<br />
! Email<br />
|-<br />
| Max van Lith<br />
| 0767328<br />
| m.m.g.v.lith@student.tue.nl<br />
|-<br />
| Shengling Shi<br />
| 0925030<br />
| s.shi@student.tue.nl<br />
|- <br />
| Michèl Lammers<br />
| 0824359<br />
| m.r.lammers@student.tue.nl<br />
|-<br />
| Jasper Verhoeven<br />
| 0780966<br />
| j.w.h.verhoeven@student.tue.nl<br />
|- <br />
| Ricardo Shousha<br />
| 0772504<br />
| r.shousha@student.tue.nl<br />
|-<br />
| Sjors Kamps<br />
| 0793442<br />
| j.w.m.kamps@student.tue.nl<br />
|- <br />
| Stephan van Nispen<br />
| 0764290<br />
| s.h.m.v.nispen@student.tue.nl<br />
|-<br />
| Luuk Zwaans<br />
| 0743596<br />
| l.w.a.zwaans@student.tue.nl<br />
|-<br />
| Sander Hermanussen<br />
| 0774293<br />
| s.j.hermanussen@student.tue.nl<br />
|-<br />
| Bart van Dongen<br />
| 0777752<br />
| b.c.h.v.dongen@student.tue.nl<br />
|}<br />
<br />
= General information = <br />
This course is about software designs and how to apply this in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.<br />
<br />
The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight in the possibilities and limitations in relation with the embedded environment (actuators, sensors, processors, RTOS). To make this operational and to practically implement an embedded control system for an autonomous robot, there is the Maze Challenge. In which the robot has to find its way out in a maze.<br />
<br />
PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:<br />
# The laser data from the laser range finder<br />
# The odometry data from the wheels<br />
<br />
In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).<br />
<br />
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.<br />
<br />
= Design =<br />
In this section the general design of the project is discussed.<br />
<br />
=== Requirements ===<br />
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:<br />
* Move and reach the exit of the maze.<br />
* The robot should avoid bumping into the walls. <br />
* So, it should perceive its surroundings.<br />
* The robot has to solve the maze in a 'smart' way.<br />
<br />
=== Functions & Communication ===<br />
[[File:behaviour_diagram.png|thumb|left|Blockdiagram for connection between the contexts]] The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills. <br />
<br />
The task are the most high level proceedings the robot should be able to do. These are:<br />
*Determine situation<br />
*Decision making<br />
*Skill selection<br />
<br />
The skills are specific actions that accomplish a certain goal. The list of skills is as follows:<br />
*Drive<br />
*Rotate<br />
*Scan environment<br />
*Handle intersections<br />
*Handle dead ends<br />
*Discover doors<br />
*Mapping environment<br />
*Make decisions based on the map<br />
*Detect the end of the maze<br />
<br />
=== Software architecture ===<br />
<br />
[[File:Overrall structure.jpg|thumb|left|Static LRF]]To solve the problem, it is divided into different blocks with their own functions. We have chosen to make these five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure below shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. Now, a short clarification of the figure will be given. More detailed information of each block will be discussed in the next sections. <br />
<br />
Lets start with the Scan block:<br />
* Scan receives information about the environment. To do this it uses his laser range finder data.<br />
* Based on this data Scan consults its potential field algorithm to make a vector for Drive.<br />
* Drive interprets the vector and sends the robot in that direction.<br />
* Together the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.<br />
* Mapping consults these positions to 'tell' Decision at what interesting point the robot is. For instance, this can be a junction or a dead end.<br />
* Then it should know if the robot has been there before. Based on that, Decision can send a new action to Scan/Drive. <br />
* Now the new vector is based on the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.<br />
<br />
=== Calibration ===<br />
<p>[[File:Originaldata.png|thumb|left|Figure 1 - Calibration: Difference between odometry and LRF]] In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors in some way. The robot was programmed to drive back and forth in front of a wall. At every time instance, it would collect odometry data as well as laser data. The laser data point that was straight in front of the robot was compared to the odometry data, i.e. the driven distance is compared to the measured distance to the wall in front of the robot. 'Figure 1 - Calibration' shows these results. The starting distance from the wall is substracted from the laser data signal. Then, the sign is flipped so that the laser data should match the odometry exactly, if the sensors would provide perfect data. Two things are now notable from this figure:<br />
*The laserdata and the odometry data do not return exactly the same values.<br />
*The odometry seems to produce no noise at all.<br />
<br />
[[File:StaticLRF.png|thumb|left|alt=Static LRF|Figure 2 - Calibration: Static LRF]]<br />
<br />
The noisy signal that was returned by the laser is presented in 'Figure 2 - Calibration'. Here, a part of the laser data is picked from a robot that was not moving.<br />
* The maximum amplitude of the noise is roughly 12 mm.<br />
* The standard deviation of the noise is roughly 5.5 mm<br />
* The laser produces a noisy signal. Do not trust one measurement but take the average over time instead.<br />
* The odometry produces no notable noise at all, but it has a significant drift as the driven distance increases. Usage is recommended only for smaller distances (<1 m)</p><br />
<br><br><br><br><br><br><br><br><br />
<br />
= Software implementation =<br />
In this section, implementation of this software will be discussed based on the block division we made.<br />
<br />
Brief instruction about one block can be found here. In addition, there are also more detailed problem-solving processes and ideas which can be found in the sub-pages of each block.<br />
<br />
=== Drive block ===<br />
[[File:Drive.jpg|thumb|left|CP of Drive]] Basically, the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive block] is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using potential field. How the potential field works in detail is shown in [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]. Potential field is an easy way to drive through corridors, and making turns. Important is to note that information from the Decision maker can influence the tasks Drive has to do. <br />
<br />
Two other methods were also considered: [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Simple_method Simple method] and [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive#Path_planning_for_turning Path planning]. Especially, we worked a lot of time on the Path planning method. However, after testing, the potential field was the most robust and most convenient method.<br />
<br><br><br><br><br><br />
<br />
=== Scan block ===<br />
[[File:Scan_cp_new.jpg|thumb|left|Composition pattern Scan]][http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan The block Scan] processes the laser data of the Laser Range Finders. This data is used to detect corridors, doors, and different kind of junctions. The data that is retrieved by 'scan' is used by all three other blocks. <br />
<br />
# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.<br />
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.<br />
# Mapping also uses data from scan to map the maze.<br />
<br />
PICO moves always to the place with the most space using its potential field. However, at junctions and intersections the current potential field is incapable of leading PICO into the desired direction. Virtual walls are constructed to shield potential path ways, than PICO will move to its desired direction which is made by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. Also, the scan block is capable of detecting doors, which is a necassary part of solving the maze. More detailed information about these properties:<br />
<br />
* Potential field<br />
* Detecting junctions/intersections<br />
* Virtual walls<br />
* Collision avoidance<br />
* Detecting doors<br />
<br />
=== Decision block ===<br />
[[File:Composition_Pattern_Decision.png|thumb|left|Composition pattern Decision]]The [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision block] contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.<br />
<br />
<ins>Input:</ins><br />
* Mapping model<br />
* Scan data<br />
<br />
<ins>Output:</ins><br />
* Specific drive action command<br />
<br />
The used maze solving algorithm is called: Trémaux's algorithm. This algorithm requires drawing lines on the floor. Every time a direction is chosen it is marked bij drawing a line on the floor (from junction to junction). Choose everytime the direction with the fewest marks. If two direction are visited as often, then choose random between these two. Finally, the exit of the maze will be reached.<br />
<br />
=== Mapping block ===<br />
[[File:Emc03 wayfindingCP1.png|thumb|left|Map&solve algorithm]] [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping The mapping block] will store the corridors and junctions of the maze. Therefore, the decision block can consider certain possiblilities, to ensure that the maze will be solved in a strategic way.<br />
<br />
As is said in the previous paragraph, the Tremaux algorithm is used: [http://blog.jamisbuck.org/2014/05/12/tremauxs-algorithm.html].<br />
<br />
The maze will consist of nodes and edges. A node is either a dead end, or any place in the maze where the robot can go in more than one direction. an edge is the connection between one node and another. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is a choice of possible directions that will lead to solving the maze.<br />
<br />
The schedule looks like this:<br />
* Updating the map:<br />
** Robot tries to find where he is located in global coordinates. Now it can decide if it is on a new node or on an old node.<br />
** The robot figures out from which node it came from. Now it can define what edge it has been traversing. It marks the edge as 'visited once more'.<br />
** All sorts of other properties may be associated with the edge. Energy consumption, traveling time, shape of the edge... This is not necessary for the algorithm, but it may help formulating more advanced weighting functions for optimizations.<br />
** The robot will also have to realize if the current node is connected to a dead end. In that case, it will request the possible door to open.<br />
* Choosing a new direction:<br />
** Check if the door opened for me. In that case: Go straight ahead and mark the edge that lead up to the door as ''Visited 2 times''. If not, choose the edge where you came from<br />
** Are there any unvisited edges connected to the current node? In that case, follow the edge straight in front of you if that one is unvisited. Otherwise, follow the unvisited edge that is on your left. Otherwise, follow the unvisited edge on your right.<br />
**Are there any edges visited once? Do not go there if there are any unvisited edges. If there are only edges that are visited once, follow the one straight ahead. Otherwise left, otherwise right.<br />
**Are there any edges visited twice? Do not go there. According to the Tremaux algorithm, there must be an edge left to explore (visited once or not yet), or you are back at the starting point and the maze has no solution.<br />
* Translation from chosen edge to turn command:<br />
** The nodes are stored in a global coordinate system. The edges have a vector pointing from the node to the direction of the edge in global coordinates. The robot must receive a command that will guide it through the maze in local coordinates.<br />
* The actual command is formulated<br />
* A set-up is made for the next node<br />
** e.g., the current node is saved as a 'nodeWhereICameFrom', so the next time the algorithm is called, it knows where it came from and start figuring out the next step.<br />
<br />
= Localisation =<br />
The purpose of the localisation is that the robot can prevent itself from driving in a loop for infinite time, by knowing where it is at a given moment in time. By knowing where it is, it can decide based on this information what to do next. As a result, the robot will be able to exit the maze within finite time, or it will tell there is no exit if it has searched everywhere without success.<br />
<br />
The localisation algorithm is explained in on the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation page]; by separating and discussing the important factors.<br />
<br />
= Code = <br />
This section highlights several interesting parts of our code.<br />
<br />
...<br />
<br />
= A-maze-ing Challenge =<br />
In the third week of this project we had to do the corridor challenge. During this challenge, we have to let the robot drive through a corridor and then take the first exit (whether left or right). This job can be tackled with two different approaches:<br />
# Make a script only based on the corridor challenge.<br />
# Make a script for the corridor challenge but with clear references to the final maze challenge.<br />
We chose the second approach. This implies that we will have to do some extra work to think about a properly structured code. Because only then the same script can be used for the final challenge. After the corridor competition, we can discuss about our choice because we failed the corridor challenge while other groups succeed. But most of these group had selected approach 1 and we already had a decent base for the a-maze-ing challenge. And this was proving its worth later..<br />
<br />
For the a-maze-ing challenge we decided on using two versions of our software package. In the first run (see section Video's further down on the page), we implemented Tremaux's algorithm together with a localiser that would together map the maze and try to solve it. Our second run was conducted with the Tremaux's algorithm and localisation algorithm turned off. Each time the robot encountered a intersection, a random decision was made on where to go next.<br />
<br />
== Run 1 ==<br />
The first run is taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way cross-section and decides to turn to the left corridor. It then immediately starts do chatter as the corridor was more narrow than expected. Next, it follows the corridor smoothly until it encounters the next T-juction. The robot is confused because of the intersection immediatly to its left. After driving closer to the wall, it mistakes it for a door. Because it (of course) didn't open, it decides to turn to right and explore the dead end. In the part between 20 seconds and 24 seconds in to the video, the robot is visibly having a hard time with the narrow corridor. It tries to drive straight but also evade the walls to the left and to the right. It recognizes another dead-end and turns around swiftly. It crosses the T-junction again by going straight and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits for the maximum of 5 seconds that it can take to open the door. Now, it recognized that also this is a dead-end and not a door. After turning around it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that he has not yet seen. Here, the Tremaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the other nodes to be placed. It decides to follow the same route as the first time, fails to drive to the corridor with the door in it and eventually got stuck in a loop.<br />
<br />
Main reason for failure is thought to be the node placement. The first T-junction that the robot encountered made PICO go into its collision avoiding mode, which might have interfered with the commands to place a node. It is also possible that this actually happened, but that the localization went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be implemented as a whole. Therefore, we decided to make the second run with a more simple version of our software, running only the core-modules that were tested and found to be reliable.<br />
<br />
== Run 2==<br />
For the second run, we ran a version of our software without the Tremaux's algorithm implemented and with the global localiser absent. These features were developed later in the project and were not finished 100%. For this run, a random decision was passed to the decision maker every time it asked for a new direction to head to.<br />
<br />
The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04 are exactly the same. This is because the robot is driven by separate blocks of software. The blocks that are active during the following of a corridor were exactly the same for both runs. At 00:7, the collision detection works just in time to prevent a head on collision with the wall in front of PICO at the T-junction. Now, a random decision is made to go left, followed by a right turn in to the corridor with the door. It recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after ringing the bell, the robot is programmed to wait for five seconds until it is allowed to move again. During these five seconds, it used the LRF to check if the door moved out of its way. After the passage was all clear, the robot started exploring the new area. Now, the robot drives in to the open space. Note that, between 0:30 and 0:36, the robot made a zigzag manoeuvre. When it first drives into the open space, the potential field points at the center of this open space. Between 0:36 and 0:46 it drives in 'open space mode'. This means that the robot will drive to the nearest wall and starts driving alongside of it. It should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47, the normal 'corridor mode' started working again. The potential field method will direct the robot towards the middle of the corridor. This explains the sharp turn it made at 0:47. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the random decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06. 
At 1:10, the well earned applause for PICO started as he finished the maze in a total time of 1:16!<br />
<br />
= Experiments =<br />
Seven experiments are done during the course. [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Here] you can find short information about dates and goals of the experiments. Also there is a short evaluation for each experiment.<br />
<br />
= Files & presentations =<br />
<br />
# Initial design document (week 1): [[File:init_design.pdf]]<br />
# First presentation (week 3): [[File:Group3_May6.pdf]]<br />
# Second presentation (week 6): [[File:Group3_May27.pdf]]<br />
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]<br />
<br />
= Videos = <br />
<br />
Experiment 4: Testing the potential field on May 29, 2015.<br />
* https://youtu.be/UAZqDMAHKq8<br />
<br />
Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.<br />
* https://www.youtube.com/watch?v=fzsNA2OUwww<br />
<br />
Maze challenge: Winning attempt! on June 17, 2015.<br />
* https://www.youtube.com/watch?v=UHz_41Bsi7c<br />
<br />
= Archive = <br />
[http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive This page] contains alternative design that is not used in the end.<br />
To see what we have worked on during the entiry process, it can be interesting to look at some of these ideas.<br />
<br />
= EMC03 CST-wiki sub-pages =<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive Drive] <br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Scan Scan]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Decision Decision]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Mapping Mapping]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Experiments Experiments]<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Archive Archive] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Integration Integration] <-- needed?<br />
* [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Localisation Localisation]</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive&diff=19936Embedded Motion Control 2015 Group 3/Drive2015-06-23T13:50:15Z<p>S119349: /* Potential field */</p>
<hr />
<div>This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Potential field =<br />
The type of driving used in the Maze Challenge was Potential Field driving. Although Drive was initially designed to be a separate block in the software design, the potential field driving relies mainly on manipulating LRF data, and the resultant vector could almost immediately be entered into the Pico IO layer. This negated the need for a separate Driving class. The Potential Field method can be seen on this TODO LINK page. The rest of this page will be dedicated to the initial methods considered for driving through the maze.<br />
<br />
= Collision avoidance =<br />
This is a function that is used to drive through corridors without touching the walls. The function measures the nearest wall and identifies whether it is on its left or right side. There is set a margin in order to avoid a collision. This margin determines when PICO has to adjust its direction. When for instance, a wall on the left is closer than the margin, PICO has to move to the right. <br />
<br />
= Old drive methods = <br />
For the corridor challenge, we tried a driving method based on extracting some key points, but that turned out not to be robust. After the corridor challenge, we then considered three options, of which potential field driving came out as a winner. The two other options use the key point method for driving through straight corridors but use a different approach to taking corners. We called the first one the 'simple method'. Basically, this method is developed as a back-up plan. The second approach uses path planning to drive around corners.<br />
<br />
=== Drive using key points ===<br />
The initial driving method we considered was based on selecting key points for various situations, and updating the driving inputs according to those. It turned out that robustly determining the key points was virtually impossible in corners, and the following algorithms would fail whenever the key points were lost. <br />
<br />
It should be noted that the Drive class is not responsible for deciding whether it is driving in a corridor, so the algorithms are not robust for varying situations - it expects another class to determine exactly what algorithm should be used at any given time.<br />
<br />
==== Straight through corridors ====<br />
[[File:Drivec.png|thumb|right|Corridor driving]]<br />
The key points for driving through corridors are the points closest to the robot on the left and right side. Since these points will also be perpendicular to a straight wall, they give all sufficient information (distance from wall as well as direction of wall) for driving through a straight corridor.<br />
<br />
First, the angle of the left and right wall is bisected, which gives a direction straight through the corridor. The deviation between the direction of the robot and aforementioned direction results in an angular velocity for the robot through a simple gain.<br />
<br />
Next, a set forward velocity vector is created, in the direction straight through the corridor, (Vf in the picture). Furthermore, the distance from the center line results in a sideways velocity vector. These vectors combined give the desired driving vector (bold black arrow in the picture), which is then transformed to the Vx and Vy relative to the robot since the robot may be rotated.<br />
<br />
This type of driving was successfully used during the corridor challenge, and resulted in smooth and fast driving.<br />
<br />
==== Driving through corners ====<br />
[[File:Drive1.png|thumb|right|Corner driving]]<br />
The key points for driving through corners are the two corner points of the corridor Pico is trying to enter. This turned out to be the biggest letdown of this method; extracting the local minimums required while turning was almost impossible to do robustly. Mostly because of this, other options were explored. <br />
<br />
On the figure on the right, one option for driving through corners is explored. The robot maintains a constant forward velocity, and makes the corner by varying the angular velocity. First, the desired direction is determined. This is the bisection of the direction to the two corner points, since this vector always points to the exact center of the corridor. Then, an angular velocity is calculated based on the difference between the desired direction and the current robot direction ('error'). A simple gain controller was used, and varying the gain influenced how 'tight' the corner would be made.<br />
<br />
=== ''' Simple method''' ===<br />
This approach is the most simple one; this also means that is not the most fancy one. However, it is still important to have this working because we can always use this method when the other methods fail, or use it as a minimal working example in other parts of the software. In addition, we have learned a lot from it and used is as base for the other methods.<br />
<br />
In short, the simple method contains 3 steps:<br />
# Drive to corridor without collision.<br />
# Detect opening (left of right) and stop in front of it.<br />
# Turn 90 degrees and start driving again.<br />
<br />
This method is a robust way to pass the corridor challenge. Although, it would not be the fastest way.<br />
<br />
=== ''' Path planning for turning '''===<br />
[[File:turningpath.png|thumb|left|alt=puntje.|Not the right figure yet]] Path planning is the second method that we worked on. It can be used when PICO approaches an intersection. Assume that PICO has to go left at a T-junction; collision avoidance alone is then no longer sufficient. So, for instance 0.2 meters before the corner, the ideal path to drive around the corner is calculated. This means that Vx, Vy, Va and the time (for turning) have to be determined at that particular moment.<br />
<br />
Then, basically:<br />
* Driving straight stops;<br />
* The robot turns with the determined Vx, Vy and Va for the associated time to drive around the corner;<br />
* Driving straight resumes.<br />
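As a hypothetical illustration of determining these variables (assuming the ideal path is a quarter-circle arc; the radius and speed values are made up):

```python
import math

def quarter_turn(v=0.3, r=0.35, left=True):
    """Vx, Vy, Va and the turning time for a 90-degree arc of radius r."""
    va = (v / r) if left else -(v / r)   # angular velocity tracing the arc
    t = (math.pi / 2) / abs(va)          # time needed to sweep 90 degrees
    return v, 0.0, va, t
```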
In practice, this method turned out to be very hard, because it is difficult to determine the right values for these variables.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive&diff=19927Embedded Motion Control 2015 Group 3/Drive2015-06-23T13:45:34Z<p>S119349: /* Simple method */</p>
<hr />
<div>This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Potential field = <br />
<br />
= Collision avoidance =<br />
This is a function that is used to drive through corridors without touching the walls. The function measures the distance to the nearest wall and identifies whether that wall is on the left or the right side. A margin is set in order to avoid a collision; this margin determines when PICO has to adjust its direction. When, for instance, a wall on the left is closer than the margin, PICO has to move to the right. <br />
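A minimal sketch of this margin check (the margin value, sideways speed and sign convention — positive y to the left — are assumptions for illustration):

```python
def avoid_sideways(dist_left, dist_right, margin=0.3, v_side=0.2):
    """Sideways velocity away from the nearest wall; positive y is left."""
    if min(dist_left, dist_right) >= margin:
        return 0.0                 # both walls far enough away: keep course
    if dist_left < dist_right:
        return -v_side             # left wall within the margin: move right
    return v_side                  # right wall within the margin: move left
```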
<br />
= Old drive methods = <br />
For the corridor challenge, we tried a driving method based on extracting some key points, but that turned out not to be robust. After the corridor challenge, we considered three options, of which potential field driving came out as the winner. The other two options use the key-point method for driving through straight corridors but take a different approach to cornering. We called the first one the 'simple method'; it was basically developed as a back-up plan. The second approach uses path planning to drive around corners.<br />
<br />
=== Drive using key points ===<br />
The initial driving method we considered was based on selecting key points for various situations, and updating the driving inputs according to those. It turned out that robustly determining the key points was virtually impossible in corners, and the following algorithms would fail whenever the key points were lost. <br />
<br />
It should be noted that the Drive class is not responsible for deciding whether it is driving in a corridor, so the algorithms are not robust for varying situations - it expects another class to determine exactly what algorithm should be used at any given time.<br />
<br />
==== Straight through corridors ====<br />
[[File:Drivec.png|thumb|right|Corridor driving]]<br />
The key points for driving through corridors are the points closest to the robot on the left and right side. Since these points are also perpendicular to a straight wall, they provide all the information needed (distance from the wall as well as direction of the wall) for driving through a straight corridor.<br />
<br />
First, the angle between the left and right wall is bisected, which gives a direction straight through the corridor. The deviation between the direction of the robot and the aforementioned direction results in an angular velocity for the robot through a simple gain.<br />
<br />
Next, a set forward velocity vector is created in the direction straight through the corridor (Vf in the picture). Furthermore, the distance from the center line results in a sideways velocity vector. Combined, these vectors give the desired driving vector (bold black arrow in the picture), which is then transformed to Vx and Vy relative to the robot, since the robot may be rotated.<br />
<br />
This type of driving was successfully used during the corridor challenge, and resulted in smooth and fast driving.<br />
<br />
==== Driving through corners ====<br />
[[File:Drive1.png|thumb|right|Corner driving]]<br />
The key points for driving through corners are the two corner points of the corridor PICO is trying to enter. This turned out to be the biggest weakness of this method; extracting the required local minima while turning was almost impossible to do robustly. Largely because of this, other options were explored. <br />
<br />
In the figure on the right, one option for driving through corners is explored. The robot maintains a constant forward velocity and makes the corner by varying the angular velocity. First, the desired direction is determined: this is the bisection of the directions to the two corner points, since this vector always points to the exact center of the corridor. Then, an angular velocity is calculated based on the difference between the desired direction and the current robot direction (the 'error'). A simple gain controller was used, and varying the gain influenced how 'tight' the corner would be taken.<br />
<br />
=== ''' Simple method''' ===<br />
This approach is the simplest one, which also means it is not the fanciest. However, it is still important to have it working, because we can always fall back on it when the other methods fail, or use it as a minimal working example in other parts of the software. In addition, we learned a lot from it and used it as a base for the other methods.<br />
<br />
In short, the simple method consists of 3 steps:<br />
# Drive to the corridor without collision.<br />
# Detect an opening (left or right) and stop in front of it.<br />
# Turn 90 degrees and start driving again.<br />
<br />
This method is a robust way to pass the corridor challenge, although it would not be the fastest.<br />
<br />
=== ''' Path planning for turning '''===<br />
[[File:turningpath.png|thumb|left|alt=puntje.|Not the right figure yet]] Path planning is the second method that we worked on. It can be used when PICO approaches an intersection. Assume that PICO has to go left at a T-junction; collision avoidance alone is then no longer sufficient. So, for instance 0.2 meters before the corner, the ideal path to drive around the corner is calculated. This means that Vx, Vy, Va and the time (for turning) have to be determined at that particular moment.<br />
<br />
Then, basically:<br />
* Driving straight stops;<br />
* The robot turns with the determined Vx, Vy and Va for the associated time to drive around the corner;<br />
* Driving straight resumes.<br />
In practice, this method turned out to be very hard, because it is difficult to determine the right values for these variables.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive&diff=19922Embedded Motion Control 2015 Group 3/Drive2015-06-23T13:43:28Z<p>S119349: /* Old drive methods */</p>
<hr />
<div>This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Potential field = <br />
<br />
= Collision avoidance =<br />
This is a function that is used to drive through corridors without touching the walls. The function measures the distance to the nearest wall and identifies whether that wall is on the left or the right side. A margin is set in order to avoid a collision; this margin determines when PICO has to adjust its direction. When, for instance, a wall on the left is closer than the margin, PICO has to move to the right. <br />
<br />
= Old drive methods = <br />
For the corridor challenge, we tried a driving method based on extracting some key points, but that turned out not to be robust. After the corridor challenge, we considered three options, of which potential field driving came out as the winner. The other two options use the key-point method for driving through straight corridors but take a different approach to cornering. We called the first one the 'simple method'; it was basically developed as a back-up plan. The second approach uses path planning to drive around corners.<br />
<br />
=== Drive using key points ===<br />
The initial driving method we considered was based on selecting key points for various situations, and updating the driving inputs according to those. It turned out that robustly determining the key points was virtually impossible in corners, and the following algorithms would fail whenever the key points were lost. <br />
<br />
It should be noted that the Drive class is not responsible for deciding whether it is driving in a corridor, so the algorithms are not robust for varying situations - it expects another class to determine exactly what algorithm should be used at any given time.<br />
<br />
==== Straight through corridors ====<br />
[[File:Drivec.png|thumb|right|Corridor driving]]<br />
The key points for driving through corridors are the points closest to the robot on the left and right side. Since these points are also perpendicular to a straight wall, they provide all the information needed (distance from the wall as well as direction of the wall) for driving through a straight corridor.<br />
<br />
First, the angle between the left and right wall is bisected, which gives a direction straight through the corridor. The deviation between the direction of the robot and the aforementioned direction results in an angular velocity for the robot through a simple gain.<br />
<br />
Next, a set forward velocity vector is created in the direction straight through the corridor (Vf in the picture). Furthermore, the distance from the center line results in a sideways velocity vector. Combined, these vectors give the desired driving vector (bold black arrow in the picture), which is then transformed to Vx and Vy relative to the robot, since the robot may be rotated.<br />
<br />
This type of driving was successfully used during the corridor challenge, and resulted in smooth and fast driving.<br />
<br />
==== Driving through corners ====<br />
[[File:Drive1.png|thumb|right|Corner driving]]<br />
The key points for driving through corners are the two corner points of the corridor PICO is trying to enter. This turned out to be the biggest weakness of this method; extracting the required local minima while turning was almost impossible to do robustly. Largely because of this, other options were explored. <br />
<br />
In the figure on the right, one option for driving through corners is explored. The robot maintains a constant forward velocity and makes the corner by varying the angular velocity. First, the desired direction is determined: this is the bisection of the directions to the two corner points, since this vector always points to the exact center of the corridor. Then, an angular velocity is calculated based on the difference between the desired direction and the current robot direction (the 'error'). A simple gain controller was used, and varying the gain influenced how 'tight' the corner would be taken.<br />
<br />
=== ''' Simple method''' ===<br />
The first approach is the simplest one, which also means it is not the fanciest. However, it is still important to have it working, because we can always fall back on it when the other methods fail. In addition, we learned a lot from it and used it as a base for the other methods.<br />
<br />
In brief, the simple method consists of 3 steps:<br />
# Drive to the corridor without collision.<br />
# Detect an opening (left or right) and stop in front of it.<br />
# Turn 90 degrees and start driving again.<br />
<br />
This method is a robust way to pass the corridor challenge, although it would not be the fastest.<br />
<br />
=== ''' Path planning for turning '''===<br />
[[File:turningpath.png|thumb|left|alt=puntje.|Not the right figure yet]] Path planning is the second method that we worked on. It can be used when PICO approaches an intersection. Assume that PICO has to go left at a T-junction; collision avoidance alone is then no longer sufficient. So, for instance 0.2 meters before the corner, the ideal path to drive around the corner is calculated. This means that Vx, Vy, Va and the time (for turning) have to be determined at that particular moment.<br />
<br />
Then, basically:<br />
* Driving straight stops;<br />
* The robot turns with the determined Vx, Vy and Va for the associated time to drive around the corner;<br />
* Driving straight resumes.<br />
In practice, this method turned out to be very hard, because it is difficult to determine the right values for these variables.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive&diff=19921Embedded Motion Control 2015 Group 3/Drive2015-06-23T13:43:18Z<p>S119349: /* Driving through corners */</p>
<hr />
<div>This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Potential field = <br />
<br />
= Collision avoidance =<br />
This is a function that is used to drive through corridors without touching the walls. The function measures the distance to the nearest wall and identifies whether that wall is on the left or the right side. A margin is set in order to avoid a collision; this margin determines when PICO has to adjust its direction. When, for instance, a wall on the left is closer than the margin, PICO has to move to the right. <br />
<br />
= Old drive methods = <br />
For the corridor challenge, we tried a driving method based on extracting some key points, but that turned out not to be robust. After the corridor challenge, we considered three options, of which potential field driving came out as the winner. The other two options use the key-point method for driving through straight corridors but take a different approach to cornering. We called the first one the 'simple method'; it was basically developed as a back-up plan. The second approach uses path planning to drive around corners.<br />
<br />
<br />
<br />
=== Drive using key points ===<br />
The initial driving method we considered was based on selecting key points for various situations, and updating the driving inputs according to those. It turned out that robustly determining the key points was virtually impossible in corners, and the following algorithms would fail whenever the key points were lost. <br />
<br />
It should be noted that the Drive class is not responsible for deciding whether it is driving in a corridor, so the algorithms are not robust for varying situations - it expects another class to determine exactly what algorithm should be used at any given time.<br />
<br />
==== Straight through corridors ====<br />
[[File:Drivec.png|thumb|right|Corridor driving]]<br />
The key points for driving through corridors are the points closest to the robot on the left and right side. Since these points are also perpendicular to a straight wall, they provide all the information needed (distance from the wall as well as direction of the wall) for driving through a straight corridor.<br />
<br />
First, the angle between the left and right wall is bisected, which gives a direction straight through the corridor. The deviation between the direction of the robot and the aforementioned direction results in an angular velocity for the robot through a simple gain.<br />
<br />
Next, a set forward velocity vector is created in the direction straight through the corridor (Vf in the picture). Furthermore, the distance from the center line results in a sideways velocity vector. Combined, these vectors give the desired driving vector (bold black arrow in the picture), which is then transformed to Vx and Vy relative to the robot, since the robot may be rotated.<br />
<br />
This type of driving was successfully used during the corridor challenge, and resulted in smooth and fast driving.<br />
<br />
==== Driving through corners ====<br />
[[File:Drive1.png|thumb|right|Corner driving]]<br />
The key points for driving through corners are the two corner points of the corridor PICO is trying to enter. This turned out to be the biggest weakness of this method; extracting the required local minima while turning was almost impossible to do robustly. Largely because of this, other options were explored. <br />
<br />
In the figure on the right, one option for driving through corners is explored. The robot maintains a constant forward velocity and makes the corner by varying the angular velocity. First, the desired direction is determined: this is the bisection of the directions to the two corner points, since this vector always points to the exact center of the corridor. Then, an angular velocity is calculated based on the difference between the desired direction and the current robot direction (the 'error'). A simple gain controller was used, and varying the gain influenced how 'tight' the corner would be taken.<br />
<br />
=== ''' Simple method''' ===<br />
The first approach is the simplest one, which also means it is not the fanciest. However, it is still important to have it working, because we can always fall back on it when the other methods fail. In addition, we learned a lot from it and used it as a base for the other methods.<br />
<br />
In brief, the simple method consists of 3 steps:<br />
# Drive to the corridor without collision.<br />
# Detect an opening (left or right) and stop in front of it.<br />
# Turn 90 degrees and start driving again.<br />
<br />
This method is a robust way to pass the corridor challenge, although it would not be the fastest.<br />
<br />
=== ''' Path planning for turning '''===<br />
[[File:turningpath.png|thumb|left|alt=puntje.|Not the right figure yet]] Path planning is the second method that we worked on. It can be used when PICO approaches an intersection. Assume that PICO has to go left at a T-junction; collision avoidance alone is then no longer sufficient. So, for instance 0.2 meters before the corner, the ideal path to drive around the corner is calculated. This means that Vx, Vy, Va and the time (for turning) have to be determined at that particular moment.<br />
<br />
Then, basically:<br />
* Driving straight stops;<br />
* The robot turns with the determined Vx, Vy and Va for the associated time to drive around the corner;<br />
* Driving straight resumes.<br />
In practice, this method turned out to be very hard, because it is difficult to determine the right values for these variables.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive&diff=19919Embedded Motion Control 2015 Group 3/Drive2015-06-23T13:42:59Z<p>S119349: /* Old drive methods */</p>
<hr />
<div>This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Potential field = <br />
<br />
= Collision avoidance =<br />
This is a function that is used to drive through corridors without touching the walls. The function measures the distance to the nearest wall and identifies whether that wall is on the left or the right side. A margin is set in order to avoid a collision; this margin determines when PICO has to adjust its direction. When, for instance, a wall on the left is closer than the margin, PICO has to move to the right. <br />
<br />
= Old drive methods = <br />
For the corridor challenge, we tried a driving method based on extracting some key points, but that turned out not to be robust. After the corridor challenge, we considered three options, of which potential field driving came out as the winner. The other two options use the key-point method for driving through straight corridors but take a different approach to cornering. We called the first one the 'simple method'; it was basically developed as a back-up plan. The second approach uses path planning to drive around corners.<br />
<br />
<br />
<br />
=== Drive using key points ===<br />
The initial driving method we considered was based on selecting key points for various situations, and updating the driving inputs according to those. It turned out that robustly determining the key points was virtually impossible in corners, and the following algorithms would fail whenever the key points were lost. <br />
<br />
It should be noted that the Drive class is not responsible for deciding whether it is driving in a corridor, so the algorithms are not robust for varying situations - it expects another class to determine exactly what algorithm should be used at any given time.<br />
<br />
==== Straight through corridors ====<br />
[[File:Drivec.png|thumb|right|Corridor driving]]<br />
The key points for driving through corridors are the points closest to the robot on the left and right side. Since these points are also perpendicular to a straight wall, they provide all the information needed (distance from the wall as well as direction of the wall) for driving through a straight corridor.<br />
<br />
First, the angle between the left and right wall is bisected, which gives a direction straight through the corridor. The deviation between the direction of the robot and the aforementioned direction results in an angular velocity for the robot through a simple gain.<br />
<br />
Next, a set forward velocity vector is created in the direction straight through the corridor (Vf in the picture). Furthermore, the distance from the center line results in a sideways velocity vector. Combined, these vectors give the desired driving vector (bold black arrow in the picture), which is then transformed to Vx and Vy relative to the robot, since the robot may be rotated.<br />
<br />
This type of driving was successfully used during the corridor challenge, and resulted in smooth and fast driving.<br />
<br />
=== Driving through corners ===<br />
[[File:Drive1.png|thumb|right|Corner driving]]<br />
The key points for driving through corners are the two corner points of the corridor PICO is trying to enter. This turned out to be the biggest weakness of this method; extracting the required local minima while turning was almost impossible to do robustly. Largely because of this, other options were explored. <br />
<br />
In the figure on the right, one option for driving through corners is explored. The robot maintains a constant forward velocity and makes the corner by varying the angular velocity. First, the desired direction is determined: this is the bisection of the directions to the two corner points, since this vector always points to the exact center of the corridor. Then, an angular velocity is calculated based on the difference between the desired direction and the current robot direction (the 'error'). A simple gain controller was used, and varying the gain influenced how 'tight' the corner would be taken.<br />
<br />
=== ''' Simple method''' ===<br />
The first approach is the simplest one, which also means it is not the fanciest. However, it is still important to have it working, because we can always fall back on it when the other methods fail. In addition, we learned a lot from it and used it as a base for the other methods.<br />
<br />
In brief, the simple method consists of 3 steps:<br />
# Drive to the corridor without collision.<br />
# Detect an opening (left or right) and stop in front of it.<br />
# Turn 90 degrees and start driving again.<br />
<br />
This method is a robust way to pass the corridor challenge, although it would not be the fastest.<br />
<br />
=== ''' Path planning for turning '''===<br />
[[File:turningpath.png|thumb|left|alt=puntje.|Not the right figure yet]] Path planning is the second method that we worked on. It can be used when PICO approaches an intersection. Assume that PICO has to go left at a T-junction; collision avoidance alone is then no longer sufficient. So, for instance 0.2 meters before the corner, the ideal path to drive around the corner is calculated. This means that Vx, Vy, Va and the time (for turning) have to be determined at that particular moment.<br />
<br />
Then, basically:<br />
* Driving straight stops;<br />
* The robot turns with the determined Vx, Vy and Va for the associated time to drive around the corner;<br />
* Driving straight resumes.<br />
In practice, this method turned out to be very hard, because it is difficult to determine the right values for these variables.</div>S119349https://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3/Drive&diff=19916Embedded Motion Control 2015 Group 3/Drive2015-06-23T13:39:13Z<p>S119349: /* Drive using key points */</p>
<hr />
<div>This page is part of the [http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2015_Group_3 EMC03 CST-wiki].<br />
<br />
= Potential field = <br />
<br />
= Collision avoidance =<br />
This is a function that is used to drive through corridors without touching the walls. The function measures the distance to the nearest wall and identifies whether that wall is on the left or the right side. A margin is set in order to avoid a collision; this margin determines when PICO has to adjust its direction. When, for instance, a wall on the left is closer than the margin, PICO has to move to the right. <br />
<br />
= Old drive methods = <br />
Before we decided to use potential field driving, we had two other options. These two options use the same method for driving through straight corridors but take a different approach to cornering. We called the first one the 'simple method'; it was basically developed as a back-up plan. The second approach uses path planning to drive around corners.<br />
<br />
=== ''' Simple method''' ===<br />
The first approach is the simplest one, which also means it is not the fanciest. However, it is still important to have it working, because we can always fall back on it when the other methods fail. In addition, we learned a lot from it and used it as a base for the other methods.<br />
<br />
In brief, the simple method consists of 3 steps:<br />
# Drive to the corridor without collision.<br />
# Detect an opening (left or right) and stop in front of it.<br />
# Turn 90 degrees and start driving again.<br />
<br />
This method is a robust way to pass the corridor challenge, although it would not be the fastest.<br />
<br />
=== ''' Path planning for turning '''===<br />
[[File:turningpath.png|thumb|left|alt=puntje.|Not the right figure yet]] Path planning is the second method that we worked on. It can be used when PICO approaches an intersection. Assume that PICO has to go left at a T-junction; collision avoidance alone is then no longer sufficient. So, for instance 0.2 meters before the corner, the ideal path to drive around the corner is calculated. This means that Vx, Vy, Va and the time (for turning) have to be determined at that particular moment.<br />
<br />
Then, basically:<br />
* Driving straight stops;<br />
* The robot turns with the determined Vx, Vy and Va for the associated time to drive around the corner;<br />
* Driving straight resumes.<br />
In practice, this method turned out to be very hard, because it is difficult to determine the right values for these variables.<br />
<br />
=== Drive using key points ===<br />
The initial driving method we considered was based on selecting key points for various situations, and updating the driving inputs according to those. It turned out that robustly determining the key points was virtually impossible in corners, and the following algorithms would fail whenever the key points were lost. <br />
<br />
It should be noted that the Drive class is not responsible for deciding whether it is driving in a corridor, so the algorithms are not robust for varying situations - it expects another class to determine exactly what algorithm should be used at any given time.<br />
<br />
==== Straight through corridors ====<br />
[[File:Drivec.png|thumb|right|Corridor driving]]<br />
The key points for driving through corridors are the points closest to the robot on the left and right side. Since these points are also perpendicular to a straight wall, they provide all the information needed (distance from the wall as well as direction of the wall) for driving through a straight corridor.<br />
<br />
First, the angle between the left and right wall is bisected, which gives a direction straight through the corridor. The deviation between the direction of the robot and the aforementioned direction results in an angular velocity for the robot through a simple gain.<br />
<br />
Next, a set forward velocity vector is created in the direction straight through the corridor (Vf in the picture). Furthermore, the distance from the center line results in a sideways velocity vector. Combined, these vectors give the desired driving vector (bold black arrow in the picture), which is then transformed to Vx and Vy relative to the robot, since the robot may be rotated.<br />
<br />
This type of driving was successfully used during the corridor challenge, and resulted in smooth and fast driving.<br />
<br />
=== Driving through corners ===<br />
[[File:Drive1.png|thumb|right|Corner driving]]<br />
The key points for driving through corners are the two corner points of the corridor PICO is trying to enter. This turned out to be the biggest weakness of this method; extracting the required local minima while turning was almost impossible to do robustly. Largely because of this, other options were explored. <br />
<br />
In the figure on the right, one option for driving through corners is explored. The robot maintains a constant forward velocity and makes the corner by varying the angular velocity. First, the desired direction is determined: this is the bisection of the directions to the two corner points, since this vector always points to the exact center of the corridor. Then, an angular velocity is calculated based on the difference between the desired direction and the current robot direction (the 'error'). A simple gain controller was used, and varying the gain influenced how 'tight' the corner would be taken.</div>S119349