Integration Strategy Robotic Drone Referee

As many different skills have been developed over the course of the project, a combined implementation has to be made in order to give a demonstration. To this end, the integration strategy laid out in this chapter was applied. This is not necessarily the strategy that will be used for integration later in the project; it was applied only for the demo.

Concepts and Difficulties

While not strictly necessary, the system performs better if functionalities are executed concurrently. Especially if the extension to real-time monitoring with multiple agents and distributed computation is to be made, it is important that multiple skills can be executed at once. For this reason the choice was made to work with Simulink, as M-files can only be run sequentially. Furthermore, Simulink gives a clear overview of the information flows and the model composition through the use of subsystems.

This restricted choice between Simulink and M-files followed from the fact that all our functionality was implemented in Matlab. If a more general programming language had been used, a robotics middleware such as ROS could have been applicable.

Implementation of Complex Functions

Initially, not much attention was paid to the future implementation in Simulink and the restrictions placed upon Simulink functions. During implementation, problems arose because of the use of functions and mechanisms that are unavailable for Simulink code generation. Simulink offers two solutions for this problem.

  • Simulink S-functions: essentially a user-defined Simulink block, which also supports languages other than M-code, such as C, C++ and Fortran. These are used for stand-alone applications and similar cases.
  • Extrinsic function calls: the code generation routes any call to a function declared extrinsic back to Matlab for evaluation. This works only on a system with Matlab installed.

As S-functions require adjustment of the code and study of all the possible parameters, for now the complex functions were declared as extrinsic functions. While this yields a working demo, it is not good engineering practice, and in future developments more robust and future-proof solutions should be researched, perhaps even moving away from Matlab.
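
As an illustration of this pattern, the sketch below shows a MATLAB Function block that declares a complex routine as extrinsic. The function name detectLines and the signals are assumptions for illustration; the actual functions in the demo differ.

    function isOut = refereeStep(ballPos, frame) %#codegen
    % Sketch of the extrinsic-call pattern; 'detectLines' stands in for one
    % of the project's complex functions and is not part of the actual model.
    coder.extrinsic('detectLines');   % evaluate in Matlab instead of generated code

    lines = zeros(10, 4);             % pre-assign so the coder knows type and size
    lines = detectLines(frame);       % this call is routed back to Matlab

    % Placeholder for the actual refereeing logic.
    isOut = ballPos(1) < min(lines(:, 1));
    end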

Model Composition

Using the strategy described in the previous section, all the developed functionality was incorporated into Simulink subsystems. Using the connections prescribed in the System Architecture, the blocks could be placed and connected. A few concessions had to be made, for example combining the detection and refereeing skills. These had to be placed in one subsystem because one output of the detection skill was not of constant size, which Simulink does not allow outside of a block.

During implementation of the functionality it was discovered that a clear way of communicating and displaying the results and the drone state had not yet been achieved. For this reason two video displays were added. One shows the live camera feed and highlights the detected features. The second shows the soccer field and displays the drone, the ball and a flag indicating whether the ball is in or out of the pitch.
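
The overlay for the first display could be built along the lines of the sketch below, assuming the Computer Vision Toolbox and hypothetical detection outputs lineSegments (rows of [x1 y1 x2 y2]) and ballCentre ([x y]); the demo model uses its own block layout for this.

    % Sketch of the feature overlay; the variable names are assumptions.
    annotated = insertShape(frame, 'Line', lineSegments, 'LineWidth', 2);
    annotated = insertMarker(annotated, ballCentre, 'circle', 'Size', 10);
    imshow(annotated);   % in the model this is fed to a video display block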

Hardware Integration

In the integration of the hardware several problems were encountered. Foremost of these was the difficulty of getting the drone camera feed into Matlab. Secondly, the added hardware needed to be mounted on the drone and powered, while being protected from falls and from the propellers. Additionally, there were problems with the magnetometer on the drone freezing, which made it difficult to obtain the yaw angle. Lastly, as there were many sensors to connect to, a solution had to be found to ensure that communication went smoothly.

The problem with the drone camera is that the codec it uses is not officially supported by Matlab, so most available solutions are slow, at about a few Hz. A very nice solution using a node.js script was found online, but this required us to connect to the drone, while the drone supports only one connection at a time. As this was outside of the available expertise and there was no time for in-depth research, a Raspberry Pi with a camera was used to stream images to Matlab as a replacement for the drone camera. Somebody else did manage to get the drone camera working at 15 Hz using the program already used to connect to the drone. In this project this was not tested, but in the next project it could be applied.
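
Reading the Pi's stream into Matlab can be done roughly as below. This is only a sketch, assuming the Pi serves an MJPEG stream at a hypothetical LAN address and that the Matlab support package for IP cameras is installed; it is not the exact setup used in the demo.

    % Sketch only: the stream URL and port are assumptions.
    cam = ipcam('http://192.168.1.10:8080/stream');   % Raspberry Pi on the demo LAN
    while true
        frame = snapshot(cam);    % grab the most recent frame from the stream
        % ... pass the frame to the detection and refereeing subsystems ...
    end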

The hardware required on the drone consisted of the tag used by the localization system and the Raspberry Pi plus camera to stream camera images to the PC. Additionally, a USB power supply with at least two outputs of 0.5 A and 1 A was required to power this hardware. As the drone should be able to lift it, a weight restriction of 400 grams was set based on reports from other users. The hardware plus power supply weighs about 250 grams, so 150 grams was left for the container. To this end a foam box was created, from material similar to that of the drone. In this box the hardware could be placed. Any remaining space was filled with packing peanuts and the box was closed with elastic bands. The power supply was much less fragile and was placed on the outside of the box. The foam box weighs less than 50 grams, and was thus more than light enough. By cutting ports and openings in the box, the camera connector and USB ports on the hardware remain reachable.

A known problem with the AR.Drone is the freezing magnetometer. Usually within minutes, the incoming values freeze and a drone restart is required. In order to still get an accurate yaw angle, two solutions were investigated.

  • An existing script which uses a color pattern on the drone and a camera placed over the RoboCup field to obtain the yaw angle
  • Placing another magnetometer on the Raspberry Pi and sending the values to Matlab

The Top View camera was chosen, as this was a proven solution and adding extra communication from the Raspberry Pi to Matlab could be more complex than expected. However, during testing it turned out that the camera script was very sensitive to other objects, such as a ball or a person, being on the pitch. At this point it was too late to try the other option, so the yaw angle was adjusted manually.

In the demo the intention was to fly the drone while running the refereeing software. To control the drone, an already tested controller was provided. This controller uses a Simulink solution to connect to the drone and to read the raw sensor data. Using a stabilizing controller, outputs for the four propellers are generated. While the inner (stabilizing) loop was already finished and tested, the outer loop was, at the time of writing, not yet finished. In order to run the controller on the drone, the Simulink model ran in external mode, meaning that Simulink becomes an interface while the generated code runs on an external platform. In this model it was not possible to add the refereeing functions. Furthermore, the model required a Wi-Fi connection to the drone, while the refereeing functions required a connection to the Raspberry Pi. To solve this dilemma, the following construction was proposed: two computers would each run an instance of Matlab, one running the drone controller and the other the refereeing demo, with the two instances sharing information over a UDP connection. This is shown in Figure 1.

Figure 1: Architecture of the Demo

A router is used to set up a LAN to which the Raspberry Pi and the two PCs connect. This way the PCs can share information over UDP sockets in both Simulink models, and PC2 can reach the Pi using an SSH connection. Additionally, PC2 can connect to the Top View camera using Ethernet to obtain the yaw angle.
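
The PC-to-PC link can be realized with UDP objects along the lines of the sketch below. The addresses, port and state vector are assumptions for illustration; inside the Simulink models the equivalent UDP Send and Receive blocks serve the same purpose.

    % Sketch of the link between the two Matlab instances (DSP System Toolbox).
    % PC1 (drone controller) sends the drone state:
    sender = dsp.UDPSender('RemoteIPAddress', '192.168.1.20', 'RemoteIPPort', 25000);
    droneState = [0.5; 1.2; 0.8; 0.1];      % placeholder [x; y; z; yaw] values
    sender(droneState);

    % PC2 (refereeing demo) receives it:
    receiver = dsp.UDPReceiver('LocalIPPort', 25000, 'MessageDataType', 'double');
    state = receiver();                     % empty until a datagram has arrived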

Performance and Testing

After integration the model was tested on the RoboCup field using all the hardware. The method was to strictly control the inputs and check that the output was correct. It was difficult to test subsystems, especially the detection and out-of-pitch detection, during development. For this reason a debugging period was required. After a few debugging sessions, a robust system was achieved. The combination of the line detection and refereeing is sensitive and can sometimes yield a false value. By adding a filter that looks at the recent history of the algorithm, these single false values are suppressed.
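
The history filter amounts to a majority vote over the most recent decisions. The sketch below assumes a buffer of five boolean out-of-pitch decisions; the actual buffer length is a tuning choice.

    function filtered = filterDecision(rawDecision) %#codegen
    % Majority vote over the last five decisions (buffer length is an assumption).
    persistent history
    if isempty(history)
        history = false(1, 5);
    end
    history = [history(2:end), rawDecision];        % shift in the newest decision
    filtered = nnz(history) > numel(history) / 2;   % single false values are outvoted
    end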

The performance of the model was quite good. We were able to process up to 25 frames per second with line detection. In the middle of the field, when only the ball detection is run, the complete output of the Raspberry Pi camera could be handled. The model was run on a high-end HP laptop, so extrapolating to dedicated image-processing hardware, handling the camera feeds of multiple drones seems feasible.
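
A processing rate of this kind can be checked with a simple timing loop such as the one below. This is only an illustrative sketch (processFrame is a hypothetical stand-in for the demo pipeline), not the measurement procedure used in the project.

    % Illustrative timing sketch; 'cam' is the stream object from the earlier sketch.
    nFrames = 100;
    t = tic;
    for k = 1:nFrames
        frame = snapshot(cam);
        processFrame(frame);      % hypothetical stand-in for detection + refereeing
    end
    fprintf('Average rate: %.1f fps\n', nFrames / toc(t));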

For a complete overview of the implementation, the demo Simulink model is available in the repository. As the system architecture was followed within the scope of the demo, the model should be self-explanatory and understandable.