MRC/Tutorials/Setting up the PICO simulator


Introduction

During the course, we have 10 large groups and only one robot, so test time on the robot is scarce. Fortunately, we have a virtual (software) representation of the robot that can be used for simulation. At the moment it:

  • Simulates the movement of the robot
  • Simulates the laser data of the robot, generated from the virtual environment
  • Provides a simple visualization

This should already be enough to get you started, and more will be added later (odometry, moving doors, better dynamics, etc.).

Updating the simulator

Before you start working with the simulator, make sure you have our latest version by running

emc-update

This will download the latest changes and compile the updated software (framework and simulator). I will notify you when I have made changes to the software so that you can run the updater, but feel free to run the command whenever you want.

Starting the simulator

As stated before, we will not be using ROS in this course, unless you really want to use it yourself. However, the provided tools are secretly built on top of it; the inter-process communication, to be more specific. Don't worry about it too much. The only thing you need to know is that before running the simulator or your software, you need to start a roscore. Simply open a terminal and run:

roscore

and keep it running. This allows processes to 'find' each other and communicate. For example, the software abstraction layer we provide 'talks' to the simulator through this roscore.

Enough about ROS, let's start the simulator! Open a terminal and run

emc-sim

That's it! What you will see is a visualization of the virtual environment (the white lines are walls) and a top-down view of the robot (the red lines).

Controlling the robot

Now, as you can see, not much is happening. The robot is standing still in a static environment. Let's change that! A first simple way to test the simulator is by controlling the robot using your keyboard. Just run:

pico-teleop

and you will be able to control the robot using the numpad keys. You can rotate the robot using '/' and '*'.
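
Later on, you will control the robot from your own software rather than the keyboard. As a rough sketch of what that looks like with the emc::IO object (which appears again in the doors section below), the example drives the robot forward at a constant speed. Note that sendBaseReference, io.ok() and the emc::Rate helper are assumptions about the framework's API, so verify them against the actual headers.

#include <emc/io.h>
#include <emc/rate.h>

int main()
{
    // Connects to the robot (or the simulator) through the running roscore.
    emc::IO io;

    // Loop at 10 Hz; emc::Rate is an assumed helper, analogous to ros::Rate.
    emc::Rate r(10);

    while (io.ok())
    {
        // Drive forward at 0.2 m/s; arguments are (vx, vy, vtheta).
        // sendBaseReference is an assumed method name -- check the framework headers.
        io.sendBaseReference(0.2, 0.0, 0.0);
        r.sleep();
    }

    return 0;
}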

Visualization

As you already noticed, the simulator pops up a visualization window showing the robot in the virtual world. This shows how the world is (at least in simulation...). It is also very interesting to know how the robot perceives the world through its sensors. Note that this is a big difference! We also created a visualization for that. Just run

emc-viz

You will again see the robot footprint displayed in red. However, the walls are gone and replaced by the actual (simulated) sensor data from the robot (the green dots). The laser data is actually represented in polar coordinates (remember, the LRF is a laser with a rotating mirror, so what it will return are angles and distances) but the visualization 'projects' them back to world coordinates. See how the laser data changes even if the robot is standing still: this is the simulated noise added to the data.
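
To make this projection concrete: if beam i has measured distance r and the scan starts at angle angle_min with steps of angle_increment, its angle is a = angle_min + i * angle_increment, and the point in the robot frame is (r cos a, r sin a). A minimal sketch of this conversion is shown below; the LaserScan struct and its field names are hypothetical stand-ins for whatever scan type your software actually receives.

#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical container for one laser scan; the field names mimic the
// (assumed) layout of the scan type the framework provides.
struct LaserScan
{
    std::vector<float> ranges;   // measured distances, one per beam
    double angle_min;            // angle of the first beam (radians)
    double angle_increment;      // angular step between beams (radians)
};

// Project a scan from polar (angle, distance) to Cartesian points in the robot frame.
std::vector<std::pair<double, double>> toCartesian(const LaserScan& scan)
{
    std::vector<std::pair<double, double>> points;
    for (std::size_t i = 0; i < scan.ranges.size(); ++i)
    {
        double a = scan.angle_min + i * scan.angle_increment;
        points.emplace_back(scan.ranges[i] * std::cos(a),
                            scan.ranges[i] * std::sin(a));
    }
    return points;
}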

Try driving the robot around using 'pico-teleop' and notice how the laser data changes, and how it differs from the actual virtual world.

Stopping the simulator and visualization

You can stop the simulator and visualization by pressing ctrl-C in the terminal.

Loading a custom heightmap

By default, the simulator loads a pretty ugly maze. Fortunately, you can change the simulation environment very easily! You just have to create a heightmap: an image that specifies for each pixel how 'high' the world has to be at that point. Since the laser only scans at one height, we can use two extremes here: flat (ground level) or high enough to be detected by the laser.

To create an image, simply open your favourite image editor and create a black-and-white image. If you don't know how to create an image, have a look at the section below. White corresponds to the floor, black to the walls (which will be detected by the laser). You have to keep the following things in mind:

  • The simulator converts the pixels to world coordinates. Each pixel corresponds to a block of 2.5 by 2.5 cm; in other words, 40 pixels correspond to 1 meter (see the sketch after this list).
  • The robot always starts in the center of the image. So, if you want the robot to start at the edge of your maze / corridor, just create a bigger image and move the black pixels to the upper part of the image.
  • You'll get the best result if the lines drawn are at least 2 pixels wide.
  • Store your image in a lossless format, i.e. use the png format (which is recommended anyway!) instead of the jpg format.
  • Make sure your heightmap images always have a border of white pixels, i.e. there should not be any non-white pixels at the borders of your image.
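
As referenced in the list above, here is a minimal sketch of the pixel-to-world conversion, using the 40-pixels-per-meter scale and the robot-starts-in-the-center convention. The exact axis orientation the simulator uses is an assumption here, so verify it by experiment.

#include <cstdio>

const double METERS_PER_PIXEL = 0.025;  // 2.5 cm per pixel, i.e. 40 pixels per meter

// Convert an image pixel (px, py) to world coordinates relative to the robot,
// which starts at the center of the image. The axis orientation chosen here
// (image rows above the center are in front of the robot) is an assumption.
void pixelToWorld(int px, int py, int width, int height, double& wx, double& wy)
{
    wx = (height / 2.0 - py) * METERS_PER_PIXEL;
    wy = (width  / 2.0 - px) * METERS_PER_PIXEL;
}

int main()
{
    // A 400x400 image spans 10 x 10 meters; its corners are 5 m from the robot in each direction.
    double wx, wy;
    pixelToWorld(0, 0, 400, 400, wx, wy);
    std::printf("top-left pixel lies at (%.2f, %.2f) m from the robot\n", wx, wy);
    return 0;
}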

Once you have created an image, simply run the simulator and provide the image file as an argument:

emc-sim --map <YOUR_IMAGE_FILE>.png

That's it!

Create a heightmap image

There are many Linux applications that you can use to create images. We suggest using Gimp, an open-source alternative to Photoshop. Although it might be a bit of overkill for our application, it has great support, and the thing we want to do (draw black lines on a white background) isn't hard. To install Gimp, run:

sudo apt-get install gimp

And run it using:

gimp

Then to create a simple image:

  • Select File -> New (or ctrl-N)
  • Specify the size of your image. Remember, 40 pixels = 1 meter, and the robot starts in the center
  • Select the Pencil Tool from the left pane.
  • Set the pencil size in the lower left pane to something sensible, e.g. 2 pixels

Now, if you left-click on the image, a dot is drawn, but we want lines! To draw a line:

  • left click, then hold shift, then left click again.
  • While holding the shift button, you can click more times to create a sequence of lines
  • To create nice, axis-aligned lines, also hold ctrl

There are two ways of saving in Gimp. The first is using File -> Save. However, this will only generate an xcf file, which can only be read by Gimp. Instead, you should use the File -> Export option:

  • File -> Export
  • Provide a name for your file. If you use '.png' as the extension, it will be saved as png.
  • Use the default png export settings

That's it!

Adding clutter objects and simulating wheelslip

In the real world, the odometry that is returned by the robot is not perfect and will have a certain amount of drift. Furthermore, the internal representation of the map never matches 100% with the real world. To add these discrepancies (which are disabled by default) to the simulator, a JSON config file can be supplied. In this file you can specify an array of objects that will be added to the world, and enable wheelslip. An example of this file is shown below:

{
  "uncertain_odom":true,
  "show_full_map":true,
  "objects":[
    {
      "length": 1.0,
      "width": 1.0,
      "trigger_radius": 1.0,
      "repeat": false,
      "velocity": 0.6,
      "init_pose": [-1.0, 0.0, 0.0],
      "final_pose": [-1.0, -2.0, 0.0]
    },
    {
      "length": 0.2,
      "width": 0.4,
      "trigger_radius": 1.0,
      "repeat": true,
      "velocity": 1.3,
      "init_pose": [1.4, 0.0, 0.0],
      "final_pose": [1.4, 1.0, 0.0]
    }
  ]
}

In this file, "uncertain_odom" is enabled, and the robot is visualized in the static map by setting "show_full_map". Furthermore, two objects are added, which will start to move from their "init_pose" to (approximately) their "final_pose", both supplied as (x, y, theta) coordinates. This movement is triggered when PICO comes within the "trigger_radius" of the object; the second object will keep repeating its movement. To supply such a file to the simulator, use the following argument:

emc-sim --config <YOUR_CONFIG_FILE>.json

Wheelslip is simulated by randomly sampling a slip factor every few seconds, making the simulator stochastic. The result is that, just like with the real robot, no two trials will result in exactly the same robot position and odometry.
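
As a conceptual sketch only (this is not the actual simulator code, and every name and number in it is hypothetical), the mechanism could look like this:

#include <random>

// Conceptual illustration only: every few seconds a new slip factor is
// sampled, and commanded velocities are scaled by it. The 0.8-1.0 range
// is made up; the real simulator implements this internally.
class SlipModel
{
public:
    SlipModel() : gen_(std::random_device{}()), dist_(0.8, 1.0), factor_(1.0) {}

    // Call this every few seconds to draw a new slip factor.
    void resample() { factor_ = dist_(gen_); }

    // The velocity that is actually executed, given the commanded one.
    double apply(double commanded_velocity) const { return factor_ * commanded_velocity; }

private:
    std::mt19937 gen_;
    std::uniform_real_distribution<double> dist_;
    double factor_;
};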


Adding doors (not needed for this year)

The real maze will contain a door that the robot needs to open. The simulator is capable of simulating these doors, so that you can test your software before it gets real! To add doors to the simulated world, simply edit your heightmap and add grey lines to it. To be specific:

  • The door should be a straight line (but not necessarily axis-aligned), with a minimum thickness of 2 pixels. To be more specific, all pixels in the door line should be connected.
  • The average RGB value should be between 50 and 205 (on a scale of 0-255). So, for example, the RGB value (100, 100, 100) is a door, while (20, 20, 20) is a wall (and (255, 255, 255) is open space); see the sketch after this list.
  • You can have multiple doors in your world. However, make sure that they are always separated by at least one pixel. Remember that there will only be one door, i.e., one sliding block, in the final maze, but the possibility of having multiple doors to test with in simulation might be nice.
  • You can experiment with the direction in which a door slides open. Darker lines (RGB values below 128) will open in a different direction than lighter lines (RGB values above 128).
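
As referenced in the list above, the grey-value rules can be summarized in a few lines of code. The thresholds below come directly from the rules; the function itself is just an illustration.

enum class Cell { Wall, Door, Open };

// Classify a heightmap pixel by its average RGB value (0-255 scale),
// following the rules above: an average below 50 is a wall, between 50
// and 205 it is a door, and above 205 it is open space. Among doors,
// averages below 128 open in a different direction than averages above 128.
Cell classifyPixel(int r, int g, int b)
{
    double avg = (r + g + b) / 3.0;
    if (avg < 50.0)
        return Cell::Wall;
    if (avg <= 205.0)
        return Cell::Door;
    return Cell::Open;
}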

An example heightmap with doors can be found at Media:Emc2015-heightmap-example.png. Note that doors as skewed as the left one in the example will not occur in the final maze (but are allowed in the simulation, for you to test with).


The doors should be visible as green blocks in the simulator. Now, you can use the emc::IO object from your code to open these doors (but only if you are standing in front of one):

// ... Make sure robot is in front of a possible door
// ... Make sure robot is standing still

io.sendRequestOpenDoor();
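
Putting it together, a door-opening sequence could look roughly like the sketch below. Only sendRequestOpenDoor is confirmed by the snippet above; sendBaseReference, io.ok() and emc::Rate are the same assumed API as in the earlier driving sketch, so verify them against the framework headers.

#include <emc/io.h>
#include <emc/rate.h>

int main()
{
    emc::IO io;
    emc::Rate r(10);  // assumed helper, see the driving sketch earlier

    // ... drive until the laser data suggests a possible door directly ahead ...

    // Stand still in front of the possible door, then request it to open.
    // sendBaseReference is the same assumed method name as before.
    io.sendBaseReference(0.0, 0.0, 0.0);
    io.sendRequestOpenDoor();

    // Give the door a few seconds to open before driving on.
    for (int i = 0; i < 30 && io.ok(); ++i)
        r.sleep();

    return 0;
}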