Mobile Robot Control 2023 Group 15

From Control Systems Technology Group



Revision as of 11:07, 17 May 2023

Group members:

Name            Student ID
Tobias Berg     1607359
Guido Wolfs     1439537
Tim de Keijzer  1422987


Practical exercises week 1:

1. On the robot laptop, open rviz. Observe the laser data. How much noise is there on the laser? What objects can be seen by the laser? Which objects cannot? What do your own legs look like to the robot?

The laser data shows where objects are, with point clouds highlighted in red. Even in a static environment, the laser data remains quite fluid and dynamic, which indicates significant noise on the laser measurements. As for our own legs, they appear to the robot as two small rectangles.

2. Go to your folder on the robot and pull your software.

This has been done.

3. Take your example of "don't crash" and test it on the robot. Does it work like in simulation?

After significant tweaking and updates, yes, the robot behaves as it does in simulation.

4. Take a video of the working robot and post it on your wiki.

Video: https://youtu.be/FXOC3232ob8

Navigation Assignment 1:

Figure 1. Updated maze

To improve the efficiency of finding the shortest path through the maze, it is suggested to decrease the number of nodes. By reducing the number of nodes, the algorithm will have to evaluate fewer nodes, which will make it more efficient.

The number of nodes can be reduced by removing nodes that are between the nodes that represent a straight line in the maze, as illustrated in Figure 1. This method results in a maze with 19 nodes, and the connections between them are indicated by red dotted lines. It is important to note that the cost of the straight paths should be updated to match the original cost of the same path. For example, the path between node 1 and 2 in the maze of Figure 1 would have a cost of 6.
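The pruning step described above can be sketched as follows (the graph representation and function name are illustrative, not the actual assignment code): any node with exactly two neighbours lies on a straight segment, so it can be removed and its two edges merged, with their costs summed.

```python
def collapse_degree_two_nodes(graph):
    """Remove nodes with exactly two neighbours, merging their edges
    and summing the edge costs (illustrative sketch).

    graph: dict mapping node -> dict of neighbour -> cost (undirected).
    """
    changed = True
    while changed:
        changed = False
        for node in list(graph):
            neighbours = list(graph[node])
            if len(neighbours) == 2:
                a, b = neighbours
                # The merged edge costs as much as the two collapsed edges.
                cost = graph[node][a] + graph[node][b]
                # Only merge if it does not overwrite a cheaper direct edge.
                if cost <= graph[a].get(b, float("inf")):
                    graph[a][b] = cost
                    graph[b][a] = cost
                graph[a].pop(node, None)
                graph[b].pop(node, None)
                del graph[node]
                changed = True
                break
    return graph

# Example: a straight corridor 1 - x - y - 2 with edge costs 2 + 2 + 2,
# which collapses to a single edge of cost 6 between nodes 1 and 2.
g = {
    1: {"x": 2},
    "x": {1: 2, "y": 2},
    "y": {"x": 2, 2: 2},
    2: {"y": 2},
}
collapse_degree_two_nodes(g)
# g is now {1: {2: 6}, 2: {1: 6}}
```

A shortest-path search such as Dijkstra's algorithm then only has to evaluate the remaining junction nodes.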






Navigation Assignment 2:

The code for the Navigation-2 assignment can be found in the group's GitLab repository, in the folder "Navigation-2".

The approach is based on the Artificial Potential Field algorithm. The laser data of the robot is used to detect objects. When an object is within a specified range of the robot, a vector from the object to the robot is created. This vector is scaled inversely with the distance of the object to the robot, so closer objects push harder. The approach uses local coordinates, with the robot at the center of the coordinate frame. All the calculated vectors are stored in a list.
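A minimal sketch of this repulsion step (the scan format, parameter names, and gains are assumptions, not the real robot interface): each laser hit within the influence radius contributes a vector pointing from the obstacle back towards the robot, with magnitude growing as the obstacle gets closer.

```python
import math

def repulsive_vectors(ranges, angle_min, angle_increment,
                      influence_radius=1.0, gain=0.5):
    """Compute repulsive vectors in the robot's local frame.

    ranges: list of laser distances; beam i points along angle
    angle_min + i * angle_increment. All parameters are illustrative.
    """
    vectors = []
    for i, r in enumerate(ranges):
        if 0.0 < r < influence_radius:
            angle = angle_min + i * angle_increment
            # Magnitude scales inversely with distance: it grows as the
            # obstacle approaches and vanishes at the influence radius.
            magnitude = gain * (1.0 / r - 1.0 / influence_radius)
            # The vector points from the obstacle to the robot,
            # i.e. opposite to the beam direction.
            vectors.append((-magnitude * math.cos(angle),
                            -magnitude * math.sin(angle)))
    return vectors

# An obstacle 0.5 m straight ahead pushes the robot backwards
# (negative local x); obstacles beyond the radius contribute nothing.
vecs = repulsive_vectors([0.5], angle_min=0.0, angle_increment=0.0)
```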

Coordinates for a goal are also defined. These are, however, defined globally, and are converted to local coordinates using the odometry data of the robot. A vector from the robot to the goal is also drawn; this vector is scaled to a fixed length and does not depend on the distance from the robot to the goal.
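The conversion and fixed-length scaling can be sketched as follows (the pose format `(x, y, theta)` from odometry and the function name are assumptions): the world-frame offset to the goal is rotated into the robot frame, then normalised to a constant attraction length.

```python
import math

def goal_in_local_frame(goal_xy, robot_pose, attract_length=1.0):
    """Convert a global goal into the robot's local frame and return a
    fixed-length attraction vector. robot_pose is assumed to be
    (x, y, theta) from odometry; names are illustrative.
    """
    gx, gy = goal_xy
    rx, ry, theta = robot_pose
    dx, dy = gx - rx, gy - ry
    # Rotate the world-frame offset by -theta into the robot frame.
    lx = math.cos(theta) * dx + math.sin(theta) * dy
    ly = -math.sin(theta) * dx + math.cos(theta) * dy
    # Scale to a fixed length so the attraction does not depend on
    # how far away the goal is.
    d = math.hypot(lx, ly)
    if d == 0.0:
        return (0.0, 0.0)
    return (attract_length * lx / d, attract_length * ly / d)

# A goal 3 m "north" of a robot facing along the world x-axis ends up
# to the robot's left, at unit length.
v = goal_in_local_frame((0.0, 3.0), (0.0, 0.0, 0.0))
```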

The list of vectors is summed to create a resulting force vector. Based on the orientation of this vector, the angular velocity of the robot is adjusted. The forward velocity is normally kept constant; however, when the robot needs to make a very sharp turn, it decreases the forward velocity, which also decreases its turn radius.
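The final step above can be sketched like this (the gains, threshold, and function name are illustrative guesses, not the tuned robot values): sum all vectors, steer towards the heading of the resultant, and slow down when that heading demands a sharp turn.

```python
import math

def velocity_command(vectors, v_forward=0.3, k_ang=1.5,
                     sharp_turn_angle=math.pi / 4):
    """Turn a list of force vectors into (forward, angular) velocities.
    Gains and thresholds are illustrative, not the tuned values.
    """
    # Sum attraction and repulsion vectors into one resultant force.
    fx = sum(v[0] for v in vectors)
    fy = sum(v[1] for v in vectors)
    heading = math.atan2(fy, fx)  # angle of the resultant, robot frame
    angular = k_ang * heading
    # Reduce the forward velocity for sharp turns, which also
    # tightens the turn radius.
    if abs(heading) < sharp_turn_angle:
        forward = v_forward
    else:
        forward = 0.3 * v_forward
    return forward, angular

# A resultant pointing straight ahead: full speed, no rotation.
fwd, ang = velocity_command([(1.0, 0.0)])
```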


With this approach, the robot is able to solve the corridor maze in the simulator:

Video: https://youtu.be/9UnAluLiUOM

The robot is also able to solve the corridor maze in real life during experimentation:

Video: https://youtu.be/Q__zXivK0Xg


Practical exercises week 2: