Robotic Drone Referee, revision 2017-11-09T13:32:41Z by Tolcer (/* Links */)
<hr />
<div><div align="left"><br />
<font size="5">Robotic Drone Referee Project</font><br /><br />
[[File:Drone_snow.png|right|thumb|350px]]<br />
<font size="4">'Soccer Referee'</font><br />
</div><br />
<br />
=Abstract=<br />
<br />
<br />
<p>Refereeing any kind of sport is not an easy job: the decision-making procedure involves many variables that cannot all be taken into account at all times. Human refereeing has many limitations, but until now it has been the only option. Due to a lack of information, referees sometimes make wrong decisions that can change the flow of the game or even make it unfair. The purpose of this project is to develop an autonomous drone that serves as a referee for any kind of soccer match. The robotic referee should be able to make objective decisions taking into account all the information available; thus, information regarding the field, the players and the ball should be assessed in real time. This project delivers an efficient, innovative, extensible and flexible system architecture able to cope with real-time requirements and well-known robotic-system constraints.</p><br />
----<br />
=Introduction - Project Description=<br />
<p><br />
This project was carried out for the second module of the 2015 MSD PDEng program. The team consisted of the following members:<br />
* Cyrano Vaseur ('''Team Leader''')<br />
* Nestor Hernandez <br />
* Arash Roomi<br />
* Tom Zwijgers<br />
The goal was to create a system architecture as well as provide a proof of concept in the form of a demo. <br />
</p><br />
<br />
<p><br />
*'''Context''': The demand for objective refereeing in sports is constant. Nowadays, more and more technology is used to assist referees in their judgement at a professional level, e.g. Hawk-Eye and goal-line technology. As more technology is applied, this might someday lead to fully autonomous refereeing. The application of such technology will, however, most likely lead to disagreements. A more accepting environment for such technology is that of robot soccer (RoboCup). Developing an autonomous referee in this context is a first step towards future applications in actual sports at a professional level.<br />
<br />
*'''Goal''': Therefore, the goals of this project are: <br />
**To develop a System Architecture (SA) for a Drone Referee system <br />
**Realize part of the SA to prove the concept<br />
</p><br />
----<br />
<br />
=System Architecture=<br />
<br />
<p><br />
Any ambitious long-term project starts with a vision of what the end product should do. For the robotic drone referee, this vision has taken the form of the System Architecture presented in <br />
[[System Architecture Robotic Drone Referee|the System Architecture section]]. The goal is to provide a possible road map and create a framework for starting development, such as the proof of concept described later in this document. First, the four key drivers behind the architecture are discussed and explained. The second part gives a detailed description and overview of the proposed system.<br />
</p><br />
<br />
==System Architecture - Design Choices==<br />
<p><br />
[[System Architecture Robotic Drone Referee#System Architecture - Design Choices|System Architecture - Design Choices]]<br />
</p><br />
<br />
==Detailed System Architecture==<br />
<p>[[System Architecture Robotic Drone Referee#Detailed System Architecture|Detailed System Architecture]]</p><br />
<br />
----<br />
<br />
=Proof of Concept (POC)=<br />
<p><br />
To prove the concept, part of the system architecture is realized. This realization is used to demonstrate the use case. The selected use case is to referee a ball crossing the pitch border lines. To make sure this refereeing task works well, even in the worst cases, a benchmark test is set up. This test involves rolling the ball out of the pitch and, just after it has crossed the line, rolling it back into the pitch. In [[Proof of Concept Robotic Drone Referee|Proof of Concept]], the use case is specified in further detail.<br />
</p><br />
==Use Case-Referee Ball Crossing Pitch Border Line==<br />
<br />
<p> The goal of the demo is to provide a proof of concept. This will be achieved through a use case that focuses on a specific situation and makes sure that situation works correctly. The details of this use case are presented in [[Proof of Concept Robotic Drone Referee#Use Case-Referee Ball Crossing Pitch Border Line|Use Case-Referee Ball Crossing Pitch Border Line]].</p><br />
<br />
==Proof of Concept Scope==<br />
<br />
<p>[[Proof of Concept Robotic Drone Referee#Proof of Concept Scope|Proof of Concept Scope]]</p><br />
<br />
==Defined Interfaces==<br />
<br />
<p>[[Proof of Concept Robotic Drone Referee#Defined Interfaces|Defined Interfaces]]</p><br />
<br />
==Developed Blocks==<br />
<br />
<p>In this section, all the developed and tested blocks from the [http://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_Robotic_Drone_Referee#Detailed_System_Architecture System Architecture] are described.</p><br />
<br />
===Rule Evaluation===<br />
<br />
<p>Rule evaluation encompasses all soccer refereeing rules currently taken into account in this project. The rule evaluation set consists of the following:</p><br />
<br />
# [[Refereeing Out of Pitch]]<br />
<br />
===World Model - Field Line predictor===<br />
<br />
<p>The detection module needs a prediction module that predicts the view the camera on the drone has at each moment. Because the localization method does not use image processing, and there is no need to provide camera-view data other than the side lines, the line predictor block only provides data on the side lines visible to the drone camera. [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor Continue...] </p><br />
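As a minimal illustration of the idea, line visibility can be estimated from the drone position and the camera's ground footprint. The sketch below is a simplification, not the actual block: it assumes a downward-looking camera with a square footprint and ignores drone tilt and yaw; the function name, field-of-view value and hover point are made up (only the 8 x 12 m field size comes from the test section below).<br />

```python
import math

def visible_side_lines(drone_x, drone_y, altitude, half_fov,
                       field_w=8.0, field_l=12.0):
    """Rough sketch: which pitch border lines fall inside the ground
    footprint of a downward-looking camera (drone tilt and yaw are
    ignored, and the footprint is treated as a square).
    """
    r = altitude * math.tan(half_fov)   # half-width of the ground footprint
    candidates = {
        "left":   abs(drone_x - 0.0)     <= r,
        "right":  abs(drone_x - field_w) <= r,
        "bottom": abs(drone_y - 0.0)     <= r,
        "top":    abs(drone_y - field_l) <= r,
    }
    return [name for name, visible in candidates.items() if visible]

# Drone hovering 2 m above a point near the left line
print(visible_side_lines(1.0, 6.0, 2.0, math.radians(40)))  # ['left']
```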
<br />
===Detection Skill===<br />
<br />
<p>The Detection skill is in charge of the vision-based detection within the frame of reference. Currently, it consists of the following developed sub-tasks:</p><br />
<br />
# [[Ball Detection]]<br />
# [[Line Detection]]<br />
<br />
===World Model - Ultra Wide Band System (UWBS) - Trilateration===<br />
<br />
<p><br />
One of the most important building blocks for the drone referee is a method for positioning. At all times the drone state, namely the set {X, Y, Z, Yaw}, should be known in order to perform the refereeing duties. Of the drone state, Z and Yaw are measured by either the drone sensor suite or other programs, as they are required for the low-level control of the drone. However, in order to localize the drone w.r.t. the field and find X and Y, a solution has to be found. To this end, several concepts were composed; of those, trilateration using Ultra-Wide Band (UWB) anchors was realized. For more details, go to [[Ultra Wide Band System - Trilateration]]. First, the rejected concepts are shortly listed, followed by a detailed explanation of the UWB system.<br />
</p><br />
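The X and Y coordinates can be recovered from anchor ranges by linearizing the range equations, a standard trilateration approach. The sketch below is illustrative only: the anchor positions, the use of NumPy's least-squares solver and the function name are assumptions, not the realized implementation.<br />

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Estimate (X, Y) from measured distances to fixed anchors.

    Subtracting the first anchor's range equation from the others
    removes the quadratic terms, leaving a linear system A p = b
    that is solved in a least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0]**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical anchors at three corners of the 8 x 12 m field
anchors = [(0.0, 0.0), (8.0, 0.0), (0.0, 12.0)]
true_pos = np.array([3.0, 5.0])
ranges = [float(np.linalg.norm(true_pos - np.array(a))) for a in anchors]
print(trilaterate_2d(anchors, ranges))  # ≈ [3. 5.]
```

With more than three anchors, the same least-squares formulation averages out range noise.<br />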
<br />
==Integration==<br />
<br />
<p><br />
As many different skills have been developed over the course of the project, a combined implementation had to be made in order to give a demonstration. To this end, an integration strategy was developed and applied. While it is not necessarily recommended to apply the same strategy throughout the rest of the project, the process is documented in [[Integration Strategy Robotic Drone Referee]].<br />
</p><br />
<br />
==Tests & Discussion==<br />
<br />
<p>The test simultaneously covered the blocks developed so far: Rule Evaluation, Field Line Predictor, Detection Skill and Trilateration.</p><br />
<br />
<p>The test was conducted on a small 'RoboCup' soccer field (8 by 12 meters) and comprised:</p><br />
<br />
* Trilateration set up and configuration<br />
* Localization testing<br />
* Ball detection and localization<br />
* Out of Pitch Refereeing<br />
<br />
The performed test can be viewed [https://youtu.be/_VaHlZv1tgI here].<br />
<br />
The conclusions of the test (demo) are the following:<br />
<br />
* The localization method based on trilateration is robust and accurate, but it should be tested on larger fields.<br />
* The color-based detection works very well with no players or other objects on the field. Further testing including robots, and taking occlusion into account, should be considered.<br />
* The out-of-pitch refereeing is very sensitive to the psi/yaw angle of the drone and to the accuracy of the references provided by the Field Line estimator.<br />
* Improving the distinction between close parallel lines should be continued.<br />
* Increasing localization accuracy and refining the Field Line estimator should be researched further.<br />
<br />
==Researched Blocks==<br />
<br />
<p>In this section, all the blocks from the [[System_Architecture_Robotic_Drone_Referee#Detailed_System_Architecture|System Architecture]] that were researched but are yet to be implemented are described. We also include the discarded options that might be interesting or useful to take into account in the future.</p><br />
<br />
===Sensor Fusion===<br />
<br />
<p>In the designed system, several environment-sensing methods provide the same information about the environment. Measurement updates from multiple sensors can increase the accuracy of a probabilistic state estimate.<br />
As an example, the Localization block can use the Ultra-Wide Band system and acceleration sensors to localize the drone. This data fusion is desirable because the Ultra-Wide Band system has high accuracy but a low update rate, while acceleration sensors have lower accuracy but a higher update rate.<br />
The other system that can benefit from sensor fusion is the psi-angle block. The psi angle is needed for drone motion control and for the detection blocks. The psi-angle data come from the drone's magnetometer and gyroscope. The magnetometer gives the psi angle with a rather high error. The gyroscope gives the derivative of the psi angle; this rate has high accuracy, but because it is a derivative, the uncertainty of the integrated angle grows over time. Sensor fusion can be used in this case to correct the data coming from both sensors. [http://cstwiki.wtb.tue.nl/index.php?title=Sensor_Fusion#Method continue]<br />
</p><br />
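A minimal sketch of the psi-angle fusion idea is a complementary filter (a simpler relative of the Kalman filtering mentioned elsewhere in this document); all rates, gains and sample times below are made up for illustration.<br />

```python
import math

def complementary_yaw(psi_prev, gyro_rate, psi_mag, dt, alpha=0.95):
    """One update of a complementary filter for the psi (yaw) angle.

    gyro_rate : yaw rate from the gyroscope [rad/s] - accurate but drifts
    psi_mag   : absolute yaw from the magnetometer [rad] - noisy, no drift
    alpha     : trust placed in the integrated gyro signal (0..1)
    """
    psi_gyro = psi_prev + gyro_rate * dt   # short-term integration
    # Blend using the shortest angular difference (handles wrap-around)
    err = math.atan2(math.sin(psi_mag - psi_gyro),
                     math.cos(psi_mag - psi_gyro))
    return psi_gyro + (1.0 - alpha) * err

# Hypothetical stream: true yaw 0.5 rad, biased gyro, clean magnetometer
psi = 0.0
for _ in range(200):
    psi = complementary_yaw(psi, gyro_rate=0.01, psi_mag=0.5, dt=0.02)
# psi converges to roughly 0.5 rad despite the 0.01 rad/s gyro bias
```

The high-rate gyro dominates short-term behaviour while the magnetometer slowly pulls the estimate back, which is exactly the complementary split between the two sensors described above.<br />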
<br />
===Cascaded Classifier Detection===<br />
<br />
<p>Classifiers based on learning algorithms such as Viola-Jones could be used to recognize patterns and features in images. For this purpose, a [http://mathworks.com/help/vision/ug/train-a-cascade-object-detector.html cascaded object detector] was researched. After proper training, the object detector should be able to uniquely identify a soccer ball in real time within the expected environment. During this project several classifiers were trained, but the result was later discarded for the following reasons:</p><br />
<br />
* Very sensitive to lighting changes<br />
* Insufficient data for effective training<br />
* Limited knowledge of how these algorithms work internally<br />
* Difficulty finding the trade-off when defining the acceptance criteria for a trained classifier<br />
<br />
<p>Nevertheless, the results were promising but not robust enough to be used at this stage. In the future, this method for detection could be further researched in order to overcome the mentioned drawbacks.</p><br />
<br />
===Position Planning and Trajectory planning===<br />
<br />
<p> <br />
Creating a reference for the drone is not as simple as just following the ball, especially in the case of multiple drones. While the ball is of interest, many other objects on the field are too. For instance, when the ball is near a line or the goal, it is better to have that line or goal in frame; near a player, it might be better to be prepared for a kick. Furthermore, with multiple drones the references and trajectory paths should never cross, and the extra drones should take up useful positions.<br />
</p><br />
<br />
<p><br />
As this is a complex problem, it is best to have a configurable solution: if a drone gets another assignment, say from following the ball to watching the line, the same algorithm should still be applicable. At the beginning of the project a good positioning solution was within the scope, but as the project progressed it was omitted. For this reason a conceptual solution was devised, but no implementation was made. The proposal was to use a weighted algorithm: different factors influence the position, and each influence is adjustable using a weight. Based on these criteria, a potential field algorithm was selected from a few candidates.<br />
</p><br />
<br />
<p> <br />
In a potential field algorithm, an artificial potential or gravitational field is overlaid on the actual world map. The agent using the algorithm goes from A to B following this field: an obstacle is represented as a repulsive object, while other things can be classified as attracting objects. In this way it is normally used for path planning. The proposed implementation would differ in that it would be used to find the reference position to go to. For instance, the ball and the goal would be attractors, while other drones could be repulsors. By configuring the strengths of the different attractors and repulsors, different drone tasks can be represented. The implementation requires more study as well as some experimentation, which required time that was not available. However, it could be an area of interest for others continuing this project.<br />
</p><br />
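The weighted attractor/repulsor idea can be sketched as follows. This is only a conceptual illustration, in line with the fact that no implementation was made in the project: the force laws, weights, positions and step size are illustrative choices, not a validated design.<br />

```python
import numpy as np

def reference_step(pos, attractors, repulsors, step=0.1):
    """Move the drone's reference position one step along the combined
    artificial field. attractors/repulsors are lists of (position,
    weight) pairs; retuning the weights changes the drone's task.
    """
    pos = np.asarray(pos, dtype=float)
    force = np.zeros(2)
    for p, w in attractors:
        force += w * (np.asarray(p) - pos)          # linear pull
    for p, w in repulsors:
        d = pos - np.asarray(p)
        dist = max(float(np.linalg.norm(d)), 1e-6)  # avoid division by zero
        force += w * d / dist**3                    # push fades with distance
    return pos + step * force

# Ball attracts the reference; a second drone pushes it away
ref = np.array([4.0, 4.0])
for _ in range(50):
    ref = reference_step(ref, [((6.0, 5.0), 1.0)], [((5.0, 3.0), 0.5)])
# ref settles close to the ball, offset slightly away from the other drone
```

Swapping the ball attractor for a strong attractor on a border line would turn the same routine into a line-watching task, which is the configurability argued for above.<br />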
<br />
<p><br />
Within the scope of this project, trajectory planning never received much attention. As the demo was always intended to use only one drone, the trajectory could simply be a straight line. However, with other drones, jumping players or bouncing balls, this is not sufficient. If a potential field algorithm is already being researched, it could of course also be applied to trajectory planning. However, swarm-based flying and intelligent positioning are fields in which a lot of research is currently being conducted; for trajectory planning it might therefore be better to do more extensive research into the state of the art.<br />
</p><br />
<br />
===Motion Control===<br />
<p><br />
Drone control is decomposed into high-level control (HLC) and low-level control (LLC). High-level (position) control converts the desired position reference into references for the drone pitch and roll angles. Low-level control then converts these desired pitch and roll angles into the motor PWM inputs required to steer the drone to the desired position. Only the HLC is discussed in this section.<br />
<br /><br /><br />
''High Level Control Design'' <br /><br />
For the high level control, cascaded position-velocity control is applied. This is illustrated in Figure 1. The error between the reference position X<sub>ref</sub> and the measured position X<sub>meas</sub> is multiplied by K<sub>p</sub> to generate a reference velocity V<sub>ref</sub>. This reference is compared with the measured velocity V<sub>meas</sub>. The error is multiplied by K<sub>f</sub> to generate the desired force F to move the drone to the correct position. This force F is divided by the total drone thrust T to obtain the pitch (θ) and roll (φ) angles (the input for the LLC). The relation between the angles, thrust T and force F is also explained well in the book by Peter Corke<ref>P. Corke, Robotics, Vision and Control, Berlin Heidelberg: Springer-Verlag , 2013.</ref>. The non-linear equations are:<br />
<br /><br /><br />
F<sub>x</sub> = T*sin(θ)<br />
<br /> <br />
F<sub>y</sub> = T*sin(φ) <br />
<br /><br /> <br />
However, with small angle deviations (below 10°) the equations simplify to:<br />
<br /><br /> <br />
F<sub>x</sub> = T*θ<br />
<br /> <br />
F<sub>y</sub> = T*φ<br />
<br /><br /> <br />
Position measurements can be gathered from the WM-Trilateration block. Measured velocities can be taken as the derivative of position measurements. <br />
The challenge in designing this configuration is in tuning K<sub>p</sub> and K<sub>f</sub>.<br />
[[File:ControllorOverviewRoboticDroneReferee.png|1000px|thumb|center|Figure 1: High level control configuration]]<br />
</p><br />
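One axis of the cascaded loop in Figure 1 can be sketched as follows. The gain values and thrust are placeholders, since tuning K<sub>p</sub> and K<sub>f</sub> is exactly the open challenge mentioned above.<br />

```python
def high_level_control(x_ref, x_meas, v_meas, thrust, Kp=1.2, Kf=0.8):
    """One axis of the cascaded position-velocity loop of Figure 1.

    The position error times Kp gives the velocity reference; the
    velocity error times Kf gives the desired force F; dividing by
    the total thrust T gives the tilt angle via the small-angle
    relation F = T * angle.
    """
    v_ref = Kp * (x_ref - x_meas)   # outer position loop
    force = Kf * (v_ref - v_meas)   # inner velocity loop
    return force / thrust           # small-angle inversion, angle in rad

# Drone 1 m short of its reference and at rest, total thrust 6 N
theta = high_level_control(x_ref=2.0, x_meas=1.0, v_meas=0.0, thrust=6.0)
# theta ≈ 0.16 rad (about 9°), still within the small-angle range
```

A real implementation would also clamp the output so the angle stays below the 10° validity limit of the small-angle approximation.<br />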
<br />
=Discussion=<br />
<br />
<p><br />
In this section, some of the most important points of improvement for the referee are listed. These issues were encountered, but because of a lack of time, or because they were out of scope, no solution was applied. For others continuing this project, it is good to be aware of the current limitations and problems with the hardware and the software.<br />
</p><br />
<br />
<p><br />
Proposed improvements for the current rule evaluation algorithms:<br />
* Second layer testing modules:<br />
** Take into account line width and ball radius: for a more precise evaluation of ball out of pitch, a second layer is required that takes line width and ball radius into account.<br />
** Develop a goal-post detection block: for detecting a goal score, a second layer is necessary to evaluate whether a ball crossing the back line is out of pitch or indeed a goal. For this, detection of the goal posts is necessary.<br />
</p><br />
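The line-width/ball-radius layer could look like the following one-dimensional sketch along the axis perpendicular to the border line. The convention that the ball must wholly cross the line before it is out, and all dimensions used, are assumptions for illustration only.<br />

```python
def ball_out_of_pitch(ball_center, line_inner_edge, line_width, ball_radius):
    """Second-layer check along the axis perpendicular to the line:
    the ball counts as out only when its near edge has passed the
    line's outer edge, i.e. the ball has wholly crossed the line.
    """
    outer_edge = line_inner_edge + line_width
    return ball_center - ball_radius > outer_edge

# Illustrative dimensions: 0.125 m line, 0.11 m ball radius
print(ball_out_of_pitch(0.30, 0.0, 0.125, 0.11))  # True: wholly out
print(ball_out_of_pitch(0.20, 0.0, 0.125, 0.11))  # False: still over the line
```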
<br />
<p><br />
Proposed improvements to the sensor suite:<br />
* Improve altitude measurements: with additional weight on the drone, the altitude measurements seemed unreliable. Therefore, in the proof of concept the drone altitude is set and updated by hand. One possible solution for more reliable altitude measurements is to extend the UWBS trilateration system to also measure drone altitude.<br />
* Improve yaw-angle measurement and control:<br />
** Current magnetometer unreliable: the magnetometer on the drone freezes unpredictably; the hardware is insufficient.<br />
** Current top camera unreliable: the top camera is used for measuring the yaw angle of the drone based on color. However, a (partially wired) net is cast below the top camera, which reflects light and distorts the measurements.<br />
*An attempt was made to apply sensor fusion (Kalman filtering) to these measurements. However, due to the hardware malfunctions (unpredictable freezes), this had little success. Therefore, in the proof of concept the yaw angle is also set and updated by hand. Hence, new methods for measuring drone yaw are needed; one simple possible solution is a better-quality magnetometer.<br />
* Establish drone self-calibration: Drone sensor outputs are extremely temperature sensitive. Therefore, drone calibration parameters require regular update. For the system to work properly without interruptions for calibration, drone self-calibration is required. A direction for a possible solution is to use Kalman filtering for prediction of both state and measurement bias. <br />
</p><br />
<br />
<p><br />
Implementation of autonomous drone motion control: <br />
* Improve low-level control: drone low-level control is currently limited. With an open outer loop there is still drift, possibly due to the faulty measurements mentioned above; consequently, these need to be resolved first.<br />
* Test the high-level control configuration: for high-level (outer-loop) position control, a cascaded position-velocity loop has already been developed. It still needs to be tested. <br />
</p><br />
<br />
<p><br />
To avoid the issues encountered during system integration, the following changes are proposed: <br />
* Improve simultaneous drone control and camera-feed input: one obstacle encountered was controlling the drone while simultaneously getting the camera feed from the onboard cameras. Possible solutions for this problem are:<br />
** Extending the script written by Daren Lee that is used for reading drone sensor data: the script could be extended to read the camera feed next to the navigation data.<br />
** Reading the feed from an auxiliary camera through a second PC connected over UDP. This is a less elegant solution, however. <br />
* Write code in C/C++: for the demo, to run all realized blocks concurrently as desired, integration was done in Simulink. This led to some difficulties: certain MATLAB functions could not be used or called in Simulink without harsh bypassing, i.e. using MATLAB's coder.extrinsic. For that reason, future developers are advised to write code in C/C++.<br />
<br />
</p><br />
<br />
= Links =<br />
<p><br />
For the repository go to [https://github.com/nestorhr/MSD2015/ Github].<br />
</p><br />
<br />
<p><br />
For the 01/04/16 demo footage click [https://youtu.be/_VaHlZv1tgI here].<br />
</p><br />
<br />
<p><br />
For the documentation of the next-generation system, click [http://cstwiki.wtb.tue.nl/index.php?title=Autonomous_Referee_System here].<br />
</p><br />
<br />
=Notes=<br />
<references /><br />
----</div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee&diff=46260Robotic Drone Referee2017-11-09T13:31:54Z<p>Tolcer: /* Links */</p>
<hr />
<div><div align="left"><br />
<font size="5">Robotic Drone Referee Project</font><br /><br />
[[File:Drone_snow.png|right|thumb|350px]]<br />
<font size="4">'Soccer Referee'</font><br />
</div><br />
<br />
=Abstract=<br />
<br />
<br />
<p>Refereeing any kind of sport is not an easy job and the decision making procedure involves a lot of variables that cannot be fully taken into account at all times. Human refereeing has a lot of limitations but it has been the only way of proceeding until now. Due to the lack of information, referees sometimes make wrong decisions that can sometimes change the flow of the game or even make it unfair. The purpose of this project is to develop an autonomous drone that will serve as a referee for any kind of soccer match. The robotic referee should be able to make objective decisions taking into account all the possible information available. Thus, information regarding the field, players and ball should be assessed in real-time. This project will deliver an efficient, innovative, extensible and flexible system architecture able to cope with real time requirements and well-known robotic systems constraints.</p><br />
----<br />
=Introduction - Project Description=<br />
<p><br />
This project was carried out for the second module of the 2015 MSD PDEng program. The team consisted of the following members:<br />
* Cyrano Vaseur ('''Team Leader''')<br />
* Nestor Hernandez <br />
* Arash Roomi<br />
* Tom Zwijgers<br />
The goal was to create a system architecture as well as provide a proof of concept in the form a demo. <br />
</p><br />
<br />
<p><br />
*'''Context''': The demand for objective refereeing in sports is continuous. Nowadays, more and more technology is used for assisting referees in their judgement on a professional level, e.g. Hawk-Eye and goal-line technology. As more and more technology is applied, this might someday lead to autonomous refereeing. Application of such technology however will most likely lead to disagreements. Nevertheless, a more acceptable environment for such technology is that of robot soccer (RoboCup). Development of autonomous referee in this context is a first step towards future applications in actual sports on a professional level.<br />
<br />
*'''Goal''': Therefore the goals of this project are: <br />
**To develop a System Architecture (SA) for a Drone Referee system <br />
**Realize part of the SA to prove the concept<br />
</p><br />
----<br />
<br />
=System Architecture=<br />
<br />
<p><br />
Any ambitious long-term project starts with a vision of what the end product should do. For the robotic drone referee this has taken the form of the System Architecture presented in <br />
[[System Architecture Robotic Drone Referee|the System Architecture section]]. The goal is to provide a possible road map and create a framework to start development, such as the proof of concept described later on in this document. Firstly the four key drives behind the architecture are discussed and explained. In the second part a detailed description and overview of the proposed system is given.<br />
</p><br />
<br />
==System Architecture - Design Choices==<br />
<p><br />
[[System Architecture Robotic Drone Referee#System Architecture - Design Choices|System Architecture - Design Choices]]<br />
</p><br />
<br />
==Detailed System Architecture==<br />
<p>[[System Architecture Robotic Drone Referee#Detailed System Architecture|Detailed System Architecture]]</p><br />
<br />
----<br />
<br />
=Proof of Concept (POC)=<br />
<p><br />
To proof the concept, part of the system architecture is realized. This realization is used to demonstrate the use case. The selected use case is to referee a ball crossing the pitch border lines. To make sure this refereeing task works well, even in the worst cases, a benchmark test is setup. This test involves rolling the ball out of pitch, and after it just crossed the line roll it back into pitch. In [[Proof of Concept Robotic Drone Referee|Proof of Concept]], the use case is specified in further detail.<br />
</p><br />
==Use Case-Referee Ball Crossing Pitch Border Line==<br />
<br />
<p> The goal of the demo is to provide a proof of concept. This will be achieved through a use-case, focusing on a specific situation and make that specific situation works correctly. The details of this use-case are presented in [[Proof of Concept Robotic Drone Referee#Use Case-Referee Ball Crossing Pitch Border Line|Use Case-Referee Ball Crossing Pitch Border Line]]</p><br />
<br />
==Proof of Concept Scope==<br />
<br />
<p>[[Proof of Concept Robotic Drone Referee#Proof of Concept Scope|Proof of Concept Scope]]</p><br />
<br />
==Defined Interfaces==<br />
<br />
<p>[[Proof of Concept Robotic Drone Referee#Defined Interfaces|Defined Interfaces]]</p><br />
<br />
==Developed Blocks==<br />
<br />
<p>In this section, all the developed and tested blocks from the [http://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_Robotic_Drone_Referee#Detailed_System_Architecture| System Architecture] are described.</p><br />
<br />
===Rule Evaluation===<br />
<br />
<p>Rule evaluation encloses all refereeing soccer rules that are currently taken into account in this project. The rule evaluation set consists of the following:</p><br />
<br />
# [[Refereeing Out of Pitch]]<br />
<br />
===World Model - Field Line predictor===<br />
<br />
<p>The detection module needs a prediction module to predict the view that the camera on drone would have in each moment. Because the localization method does not use image processing and there is no need to provide data of camera view other than side lines, the line predictor block just provides data of visible side lines, visible to drone camera.[http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor [Continue...]] </p><br />
<br />
===Detection Skill===<br />
<br />
<p>The Detection skill is in charge of the vision-based detection within the frame of reference. Currently, it consists of the following developed sub-tasks:</p><br />
<br />
# [[Ball Detection]]<br />
# [[Line Detection]]<br />
<br />
===World Model - Ultra Wide Band System (UWBS) - Trilateration===<br />
<br />
<p><br />
One of the most important building blocks for the drone referee is a method for positioning. At all times, the drone state, namely the set {X,Y,Z,Yaw}, should be known in order to perform the refereeing duties. Of the drone state, Z and the Yaw are measured by either the drone sensor suite or other programs as they are required for the low-level control of the drone. However, in order to localize w.r.t. the field and to find X and Y, a solution has to be found. To this end, several concepts were composed. Of those concepts, trilateration using Ultra Wide Band Anchors (UWB) was realized. For more details, go to [[Ultra Wide Band System - Trilateration]]. First, the rejected concepts are shortly listed, followed by a detailed explanation of the UWB system.<br />
</p><br />
<br />
==Integration==<br />
<br />
<p><br />
As many different skills have been developed over the course of the project, a combined implementation has to be made in order to give a demonstration. To this end, the integration strategy was developed and applied. While it is not recommended that the same strategy is applied throughout the project, the process was documented in [[Integration Strategy Robotic Drone Referee]].<br />
</p><br />
<br />
==Tests & Discussion==<br />
<br />
<p>The test consisted of a simultaneous coverage of the developed blocks so far: Rule Evaluation, Field Line Predictor, Detection Skill and Trilateration.</p><br />
<br />
<p>The test was conducted in a 'Robocup' small soccer field (8 by 12 meters) and comprised of:</p><br />
<br />
* Trilateration set up and configuration<br />
* Localization testing<br />
* Ball detection and localization<br />
* Out of Pitch Refereeing<br />
<br />
The performed test can be viewed [https://youtu.be/_VaHlZv1tgI here].<br />
<br />
The conclusions of the test (demo) are the following:<br />
<br />
* The localization method based on trilateration is very robust and accurate but it should be tested in bigger fields.<br />
* The color based detection works really good with no players or other objects in the field. Further testing including robots and taking into account occlusion should be considered.<br />
* The out of pitch refereeing is very sensitive to the psi/yaw angle of the drone and the accuracy of the references provided by the Field Line estimator.<br />
* Improvement in distinguising between close parallel lines should be continued.<br />
* Increasing localization accuracy and refining the Field Line estimator should be researched.<br />
<br />
==Researched Blocks==<br />
<br />
<p>In this section, all the blocks from the [[System_Architecture_Robotic_Drone_Referee#Detailed_System_Architecture| System Architecture]] that were researched, but are yet to be implemented, are described. Also we include in this section all the discarded options that might be interesting or useful to take into account in the future.</p><br />
<br />
===Sensor Fusion===<br />
<br />
<p>In the designed system there are some environment sensing methods that give the same information about the environment. It can be proven that measurement updates can increase accuracy of a probabilistic function.<br />
As an example, Localization block can use UltraWide band and acceleration sensors to localize the position of drone. This data fusion is desirable because the UltraWide band system has a high accuracy but low response time, and acceleration sensors have lower accuracy but a higher response time.<br />
The other system that can benefit from sensor fusion is the psi angle block. The psi angle is needed for drone motion control and also for other detection blocks. The data of the psi angle are coming from the drone’s magneto meter and gyroscope. The magneto meter gives the psi angle with a rather high error. The gyroscope gives the derivation of the psi angle. The data provided by the gyroscope has a high accuracy but because it is derivation of psi angle the uncertainty increases by time. Sensor fusion can be used in this case to correct data coming from both sensors. [http://cstwiki.wtb.tue.nl/index.php?title=Sensor_Fusion#Method continue]<br />
</p><br />
<br />
===Cascaded Classifier Detection===<br />
<br />
<p>The use of classifiers using multiple learning algorithms such as the Viola-Jones algorithm could be used to obtain a predictive performance in recognizing patterns and features in images. For this purpose, a [http://mathworks.com/help/vision/ug/train-a-cascade-object-detector.html cascaded object detector] was researched. After a proper training, the object detector should be able to uniquely identify in real-time a soccer ball within the expected environment. During this project several classifiers were trained but the result was later discarded because of the following reasons:</p><br />
<br />
* Very sensitive to lighting changes<br />
* Insufficient data for effective training<br />
* Limited insight into how these algorithms work internally<br />
* Difficult to find the trade-off when defining the acceptance criteria for a trained classifier<br />
<br />
<p>Nevertheless, the results were promising but not robust enough to be used at this stage. In the future, this method for detection could be further researched in order to overcome the mentioned drawbacks.</p><br />
<br />
===Position Planning and Trajectory planning===<br />
<br />
<p> <br />
Creating a reference for the drone is not as simple as just following the ball, especially in the case of multiple drones. While the ball is of interest, many other objects in the field are as well. For instance, when near a line or the goal, it is better to have this in frame; in the case of a player, it might be better to be prepared for a kick. Furthermore, with multiple drones the references and trajectory paths should never cross, and the extra drones should be in useful positions.<br />
</p><br />
<br />
<p><br />
As this is a complex problem, it is best to have a configurable solution: if a drone gets another assignment, say from following the ball to watching the line, the same algorithm should be applicable. At the beginning of the project, a good positioning solution was within scope, but as the project progressed it was omitted. For this reason a conceptual solution was devised, but no implementation was made. The proposal was a weighted algorithm: different factors influence the position, and their influence is adjustable through weights. Based on these criteria, a potential field algorithm was selected from a few candidates.<br />
</p><br />
<br />
<p> <br />
In a potential field algorithm, an artificial potential or gravitational field is overlaid on the actual world map. The agent using the algorithm goes from A to B by following this field: an obstacle is represented as a repulsive object, while other things can be classified as attracting objects. In this way it is normally used for path planning. The proposed implementation would differ in that it would be used to find the reference position to go to. For instance, the ball and the goal would be attractors, while other drones could be repulsors. By configuring the strengths of the different attractors and repulsors, different drone tasks could be represented. The implementation requires more study as well as some experimentation, which required time that was not available. However, it could be an area of interest to others continuing this project.<br />
</p><br />
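Since the idea was never implemented, the following is only a conceptual sketch of such a weighted reference generator; the force laws, weights and step size are illustrative assumptions, not a worked-out design:

```python
import math

def reference_position(drone, attractors, repulsors, step=0.5):
    """One descent step on an artificial potential field, returning the
    next reference position for the drone. Attractors (e.g. ball, goal)
    pull with constant weight; repulsors (e.g. other drones) push with
    a force that falls off with the square of the distance."""
    fx = fy = 0.0
    for (x, y, w) in attractors:
        dx, dy = x - drone[0], y - drone[1]
        d = math.hypot(dx, dy) or 1e-9
        fx += w * dx / d                  # unit pull, scaled by weight
        fy += w * dy / d
    for (x, y, w) in repulsors:
        dx, dy = drone[0] - x, drone[1] - y
        d = math.hypot(dx, dy) or 1e-9
        fx += w * dx / d**3               # push falls off with distance squared
        fy += w * dy / d**3
    return drone[0] + step * fx, drone[1] + step * fy

# Drone at the origin, ball attracting from (4, 0), another drone
# repelling at (1, 0): the net reference moves toward the ball, but
# less far than it would without the repulsor.
ref = reference_position((0.0, 0.0), [(4.0, 0.0, 1.0)], [(1.0, 0.0, 0.5)])
```

Changing the weight sets here is exactly how a different drone task (follow ball, watch line) would be represented.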
<br />
<p><br />
Within the scope of this project, trajectory planning never received much attention. As the demo was always intended to use only one drone, the trajectory could simply be a straight line. However, with other drones, jumping players or bouncing balls, this is not sufficient. If a potential field algorithm is already being researched, it could of course also be applied to trajectory planning. However, swarm-based flying and intelligent positioning are active research fields right now, so for trajectory planning it might be better to do more extensive research into the state of the art.<br />
</p><br />
<br />
===Motion Control===<br />
<p><br />
Drone control is decomposed into high level control (HLC) and low level control (LLC). High level (position) control converts the desired position reference into a reference for the drone pitch and roll angles. Low level control then converts this desired pitch and roll into the motor PWM inputs required to steer the drone to the desired position. Only the HLC is discussed in this section.<br />
<br /><br /><br />
''High Level Control Design'' <br /><br />
For the high level control, cascaded position-velocity control is applied. This is illustrated in Figure 1. The error between the reference position X<sub>ref</sub> and the measured position X<sub>meas</sub> is multiplied by K<sub>p</sub> to generate a reference velocity V<sub>ref</sub>. This reference is compared with the measured velocity V<sub>meas</sub>. The error is multiplied by K<sub>f</sub> to generate the desired force F to move the drone to the correct position. This force F is divided by the total drone thrust T to obtain the pitch (θ) and roll (φ) angles (the input for the LLC). The relation between the angles, thrust T and force F is also explained well in the book by Peter Corke<ref>P. Corke, Robotics, Vision and Control, Berlin Heidelberg: Springer-Verlag, 2013.</ref>. The non-linear equations are:<br />
<br /><br /><br />
F<sub>x</sub> = T*sin(θ)<br />
<br /> <br />
F<sub>y</sub> = T*sin(φ) <br />
<br /><br /> <br />
However, with small angle deviations (below 10°) the equations simplify to:<br />
<br /><br /> <br />
F<sub>x</sub> = T*θ<br />
<br /> <br />
F<sub>y</sub> = T*φ<br />
<br /><br /> <br />
Position measurements can be gathered from the WM-Trilateration block. Measured velocities can be taken as the derivative of position measurements. <br />
The challenge in designing this configuration is in tuning K<sub>p</sub> and K<sub>f</sub>.<br />
[[File:ControllorOverviewRoboticDroneReferee.png|1000px|thumb|center|Figure 1: High level control configuration]]<br />
</p><br />
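The cascaded loop of Figure 1 can be sketched for one axis as follows; the gains here are illustrative placeholders, not the tuned values the text says are still to be found:

```python
def hlc_step(x_ref, x_meas, v_meas, Kp, Kf, T):
    """One step of the cascaded position-velocity HLC for one axis:
    Kp maps position error to a reference velocity, Kf maps velocity
    error to a force, and dividing by thrust T gives the small-angle
    attitude command (theta for the x-axis, phi for the y-axis)."""
    v_ref = Kp * (x_ref - x_meas)     # outer position loop
    F = Kf * (v_ref - v_meas)         # inner velocity loop
    return F / T                      # small-angle: F_x = T*theta => theta = F_x/T

# x-axis example: drone 1 m behind the reference, currently at rest.
theta = hlc_step(x_ref=1.0, x_meas=0.0, v_meas=0.0, Kp=1.0, Kf=0.5, T=5.0)
```

With these placeholder gains the commanded angle is 0.1 rad (about 5.7°), which stays within the small-angle regime assumed above.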
<br />
=Discussion=<br />
<br />
<p><br />
In this section some of the most important improvement points for the referee are listed. These issues were encountered, but due to lack of time, or because they were out of scope, no solution was applied. For those continuing this project, it is good to be aware of the current limitations and problems with the hardware and the software.<br />
</p><br />
<br />
<p><br />
Proposed improvements for the current rule evaluation algorithms:<br />
* Second layer testing modules:<br />
** Take into account line width and ball radius: For a more precise evaluation of ball out of pitch, a second layer is required that takes line width and ball radius into account.<br />
** Develop goal post detection block: For detecting a goal score, a second layer is necessary to evaluate whether a ball crossing the back line is a ball out of pitch or indeed a goal score. For this, detection of the goal post is necessary.<br />
</p><br />
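Such a second layer could look like the following sketch. It assumes the usual convention that the ball is out only once it has wholly crossed the outer edge of the boundary line; the field geometry values used in the example are hypothetical:

```python
def ball_out_of_pitch(ball_x, half_field, line_width, ball_radius):
    """True if the whole ball is past the outer edge of the side line
    (1-D check along the axis perpendicular to the line). Assumes the
    painted line straddles the nominal field boundary."""
    outer_edge = half_field + line_width / 2.0
    return abs(ball_x) > outer_edge + ball_radius

# Ball center 9.2 m out on a field with half-length 9.0 m: fully out.
out = ball_out_of_pitch(9.2, half_field=9.0, line_width=0.05, ball_radius=0.11)
# Ball center 9.10 m out: still overlapping the line region, so in play.
in_play = not ball_out_of_pitch(9.10, half_field=9.0, line_width=0.05, ball_radius=0.11)
```

The same check applied to the goal mouth, combined with the proposed goal post detection, would separate a goal score from a plain ball out of pitch.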
<br />
<p><br />
Proposed improvements to the sensor suite:<br />
* Improve altitude measurements: With additional weight on the drone, altitude measurements seemed unreliable. Therefore, in the proof of concept drone altitude is set and updated by hand. One possible solution for more reliable measurements of altitude is to extend the UWBS trilateration system to also measure drone altitude.<br />
* Improve yaw-angle measurement and control:<br />
** Current magnetometer unreliable: The hardware is insufficient; the magnetometer on the drone freezes unpredictably.<br />
** Current top camera unreliable: The top camera is used for measuring the yaw angle of the drone based on color. However, a (partially wired) net is suspended below the top camera; its light reflections distort the measurements.<br />
*An attempt was made to apply sensor fusion (Kalman filtering) to these measurements. However, due to the hardware malfunctions (unpredictable freezes) this had little success. Therefore, in the proof of concept the yaw angle is also set and updated by hand. Hence, new methods for measuring drone yaw are needed. One simple possible solution is a better-quality magnetometer.<br />
* Establish drone self-calibration: Drone sensor outputs are extremely temperature sensitive; therefore, drone calibration parameters require regular updates. For the system to work properly without interruptions for calibration, drone self-calibration is required. A possible solution direction is to use Kalman filtering to predict both the state and the measurement bias. <br />
</p><br />
<br />
<p><br />
Implementation of autonomous drone motion control: <br />
* Improve low level control: Drone low level control is currently confined. With an open outer loop, drift still occurs, possibly due to the faulty measurements already mentioned; consequently, these need to be resolved first.<br />
* Test high level control configuration: For high level (outer loop) position control, a cascaded position-velocity loop has already been developed. This, however, still needs to be tested. <br />
</p><br />
<br />
<p><br />
To avoid the issues encountered during system integration, the following changes are proposed: <br />
* Improve simultaneous drone control and camera feed input: One obstacle encountered has been controlling the drone while simultaneously getting the camera feed from the onboard cameras. Possible solutions for this problem are:<br />
** Extending the script written by Daren Lee that is used for reading drone sensor data: The script could be extended to read the camera feed in addition to navigation data.<br />
** Reading the feed from an auxiliary camera through a second PC connected via UDP. This is a less elegant solution, however. <br />
* Write code in C/C++: For the demo, to run all realized blocks concurrently as desired, integration was done in Simulink. This led to some difficulties: certain MATLAB functions could not be used or called in Simulink without harsh workarounds, i.e. using coder.extrinsic. For that reason, future developers are advised to write code in C/C++.<br />
<br />
</p><br />
<br />
= Links =<br />
<p><br />
For the repository go to [https://github.com/nestorhr/MSD2015/ Github].<br />
</p><br />
<br />
<p><br />
For the 01/04/16 demo footage click [https://youtu.be/_VaHlZv1tgI here].<br />
</p><br />
<br />
<p><br />
For the improved and next generation version of the system documentation click [http://cstwiki.wtb.tue.nl/index.php?title=Autonomous_Referee_System here].<br />
</p><br />
<br />
=Notes=<br />
<references /><br />
----</div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Autonomous_Referee_System&diff=45730Autonomous Referee System2017-10-24T15:07:56Z<p>Tolcer: </p>
<hr />
<div><div align="left"><br />
<font size="4">'An objective referee for robot football'</font><br />
</div><br />
<br />
<div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_large}}</center></div><br />
__NOTOC__<br />
<br />
<br />
<br />
A football referee can hardly ever make "the correct decision", at least not in the eyes of the thousands or sometimes millions of fans watching the game. When a decision benefits one team, there will always be complaints from the other side. It is often forgotten that the referee is also merely a human. To make the game fairer, the use of technology to support the referee is increasing. Nowadays, several stadiums are already equipped with [https://en.wikipedia.org/wiki/Goal-line_technology goal line technology] and referees can be assisted by a [http://quality.fifa.com/en/var/ Video Assistant Referee (VAR)]. If the use of technology keeps increasing, a human referee might one day become entirely obsolete. The proceedings of a match could be measured and evaluated by a system of sensors. With enough (correct) data, this system would be able to recognize certain events and make decisions based on them.<br />
<br />
<br />
The aim of this project is to do just that: make a system which can evaluate a soccer match, detect events and make decisions accordingly. Making a functioning system which could actually replace the human referee would probably take a couple of years, which we don't have. This project focuses on creating a high level system architecture and giving a proof of concept by refereeing a robot-soccer match, where currently the refereeing is also still done by a human. This project builds upon the [[Robotic_Drone_Referee|Robotic Drone Referee]] project executed by the first generation of Mechatronics System Design trainees. <br />
<br />
<br />
To navigate through this wiki, the internal navigation box on the right side of the page can be used. <br />
<br />
<br />
<center>[[File:tumbnail_test_video.png|center|750px|link=https://www.youtube.com/embed/XyRR3rPQ4R0?autoplay=1]]</center><br />
<br />
<br />
=Team=<br />
This project was carried out for the second module of the 2016 MSD PDEng program. The team consisted of the following members:<br />
* Akarsh Sinha<br />
* Farzad Mobini<br />
* Joep Wolken<br />
* Jordy Senden<br />
* Sa Wang<br />
* Tim Verdonschot<br />
* Tuncay Uğurlu Ölçer<br />
<br />
<br />
<br />
<center>[[File:Drone Ref.png|thumb|center|1000px|Illustration by Peter van Dooren, BSc student at Mechanical Engineering, TU Eindhoven, November 2016.]]</center><br />
<br />
=Acknowledgements=<br />
A project like this is never done alone. We would like to express our gratitude to the following parties for their support and input to this project.<br />
<br />
<center>[[File:logoAcknowledgements.png|center|1000px]]</center><br />
<br />
<br />
<br />
<br />
<br />
<br />
<!--<br />
<br />
==Ground Robot==<br />
<br />
[[File:Ground_Robot_specs.png|thumb|right|500px|Ground robot specs]]<br />
<br />
[[File:Ground_Robot_overview.png|thumb|right|400px|Ground robot w.r.t. field]]<br />
<br />
'''Requirements for Ground Robot'''<br />
<br />
<br><br />
<br />
*''Motion:''<br />
** The GR should be able to keep the ball in sight of its Kinect camera. If the ball is lost, GR should try to find it again with the Kinect.<br />
** Since the ball is best tracked with the Kinect, the omni-vision camera can be used to keep track of the players. <br />
<br />
<br><br />
<br />
*''Vision:''<br />
** Position self with respect to field lines<br />
** Detect ball<br />
** Estimate global ball position and velocity<br />
** Detect objects (players) in field<br />
** Estimate global position and velocity of objects<br />
** Determine which team the player belongs to<br />
<br />
<br><br />
<br />
*''Communication:''<br />
: Send to laptop:<br />
:* Ball position + velocity estimate<br />
:* Player position + velocity estimate<br />
:* Player team/label<br />
:* Own position + velocity<br />
:* Own side/home goal<br />
:* Own detection of B.O.O.P. or Collision (maybe)<br />
<br />
: Receive from laptop:<br />
:* Reference position <br />
:* Detection flag<br />
<br />
<br><br />
<br />
*''Extra:''<br />
** Get ball after B.O.O.P.<br />
** Communicate with second Ground Robot<br />
<br />
==Drone==<br />
*AR Parrot Drone Elite Addition 2.0<br />
*19 min. flight time (ext. battery)<br />
*720p Camera (but used as 360p)<br />
*~70° Diagonal FOV (measured)<br />
*Image ratio 16:9<br />
===Drone control===<br />
*Has own software & controller<br />
*Possible to drive by MATLAB using arrow keys<br />
*Driving via position commands and the format of the input data are still to be worked out<br />
*x, y, θ position feedback via top cam and/or UWBS<br />
*z position will be constant and decided according to the FOV<br />
<br />
==Positioning==<br />
<br />
Positioning System block is responsible for creating the reference position of the drone and the ground robot referee based on the information of the players and the ball. The low level controller of the both system will incorporate the reference position as a desired state for tracking purposes. <br />
[[File:Positioning.png|thumb|right|400px|Depiction of the positioning subsystem.]]<br />
Currently : <br />
*Ground referee (Turtle) focuses on ball<br />
*Drone focuses on collision/players<br />
<br />
==Detection==<br />
The fault detection should<br />
*Receive images and estimations of state related parameter from the drone and the ground robot. <br />
*Based on the information, evaluate which of the two rules (BOOP and Collision) are violated.<br />
*Communicate with respective refs the final verdict<br />
** Collaboration with the ground ref<br />
*** Receive estimated<br />
**** Ball Position and velocity <br />
**** Player position and velocity<br />
**** Position of line/ ball boundary<br />
*** Transmit decision flag regarding BOOP <br />
** Collaboration with the drone ref<br />
*** Receive estimated<br />
**** Player position and velocity <br />
**** Ball Position and velocity <br />
*** Transmit decision flag regarding Collision <br />
<br />
<p><br />
===Definition of fault/foul===<br />
The definition of foul/fault or offence is based on the Robo Cup MSL Rule Book <ref> [http://wiki.robocup.org/Middle_Size_League#Rules "Middle Size Robot League Rules and Regulations"] </ref> . Simple physical contact does not represent an offence. Speed and impact of physical contact shall be used to define offence or a foul. There are two cases in which foul detection should be formulated.<br />
*'''Case 1: One of the robots is in possession of the ball'''<br />
[[File:Contact Between Robots.png|thumb|right|450px|Indirect (left) and direct (right) contact between robots. ]]<br />
** A foul will be defined in this case if Robot B impedes the progress of the opponent by <br />
**#Colliding after charging at A with v unit velocity<br />
**#Applying (instantaneous) pushing with ≥ 𝑭 unit force <br />
**#Continuing to push for time ≥ t seconds <br />
**#Knocking the ball off A by sudden (Instantaneous) application of force (≥ 𝑭 unit force)<br />
*Possible ways of measuring these <br />
***Velocity<br />
**#Visual odometry (Image-based Object Velocity Estimation)<br />
***Application of (instantaneous) force<br />
**#Use visual odometry and calculate velocity/ acceleration and include time data. <br />
**#Estimate force accordingly<br />
**Continuous push (B is pushing A)<br />
**#Detect instantaneous application of F unit force<br />
**#Detect if B changes direction of movement within t seconds<br />
**Knocking off ball (only visual data)<br />
**#Detect collision<br />
**#Detect ball and Player A after collision <br />
<br />
*'''Case 2: None of the robots are in possession of the ball''' <br />
[[File:No Robot Has Ball Possession.png|thumb|right|300px|No robot has ball possession.]]<br />
**A foul will be defined in this case if Robot either A or B impedes the progress of the opponent by <br />
**#Colliding with larger momentum (say, pB ≥ pA units) <br />
**#Continues with the momentum for time ≥ t seconds (dp/dt=0, for t seconds after impact)<br />
**Possible ways of measuring these <br />
***Momentum<br />
***#Use visual odometry to estimate velocity (and elapsed time)<br />
***#Estimate momentum accordingly<br />
***Continuous application of momentum<br />
***#Detect if defaulter changes direction of movement within t seconds<br />
</p><br />
<br />
==Image processing==<br />
===Capturing images===<br />
'''Objective''': Capturing images from the (front) camera of the drone.<br />
<br />
<br />
'''Method''':<br />
*MATLAB<br />
** ffmpeg<br />
** ipcam<br />
** gigecam<br />
** hebicam<br />
* C/C++/Java/Python<br />
** opencv<br />
…<br />
No method has been chosen yet, but ipcam, gigecam and hebicam were tested and do not work for the camera of the drone. FFmpeg was also tested and does work, but capturing one image takes 2.2 s, which is far too slow. Therefore, it might be better to use software written in C/C++ instead of MATLAB.<br />
<br />
===Processing images===<br />
'''Objective''': Estimating the player (and ball?) positions from the captured images.<br />
<br />
<br />
'''Method''': Detect ball position (if on the image) based on its (orange/yellow) color and detect the player positions based on its shape/color (?).<br />
<br />
== Top Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
<br />
<br />
The topcam can stream images at a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
<br />
=References=<br />
<references/><br />
<br />
--></div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45729Implementation MSD162017-10-23T07:24:42Z<p>Tolcer: /* Integration */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (B.O.O.P.)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably, we would also use this software to process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
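The per-pixel conversion behind this step can be sketched as follows. This sketch uses the full-range (JPEG) BT.601 variant of the transform; MATLAB's rgb2ycbcr uses a studio-range scaling, so the exact threshold values in the project would differ. The threshold values below are illustrative assumptions:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr for 8-bit channel values."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_ball_color(r, g, b, cb_max=100, cr_min=150):
    """Orange/yellow pixels map to low Cb and high Cr (the corner of
    the CbCr plane referred to in this wiki). Thresholds illustrative."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb < cb_max and cr > cr_min

# A saturated orange pixel passes; a green field pixel does not:
orange = is_ball_color(255, 128, 0)
green = is_ball_color(30, 160, 40)
```

The advantage over thresholding in RGB is that the chroma plane (Cb, Cr) is largely decoupled from brightness Y, so the same color thresholds hold under varying illumination.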
<br />
=== Line Detection ===<br />
Line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is separating the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill function.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; these colors lie in the upper-left corner of the CbCr plane. A binary image is created in which pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter out noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each could be a ball. Blobs that are too big or too small are removed from the list. For the remaining ball candidates in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
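The confidence formula quoted above can be sketched as a standalone function. How Rblob is derived from the blob's axis lengths is not stated in the text; taking it as the mean of the two semi-axes is an assumption made here:

```python
def ball_confidence(minor_axis, major_axis, r_ball_expected):
    """Roundness term times size term, both in (0, 1], so a perfectly
    round blob of the expected radius scores 1.0."""
    r_blob = (minor_axis + major_axis) / 4.0   # assumed: mean semi-axis
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball_expected) / max(r_blob, r_ball_expected)
    return roundness * size

# A round blob of the expected radius scores 1.0; an elongated blob
# of the same mean radius scores much lower.
c_good = ball_confidence(20.0, 20.0, r_ball_expected=10.0)
c_bad = ball_confidence(10.0, 30.0, r_ball_expected=10.0)
```

Because both factors are ratios in (0, 1], the confidence penalizes elongation and wrong size independently, which is what makes a single acceptance threshold workable.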
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering on color in the CbCr plane, the filtering is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blobs that could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different than when seen from an angle. A bigger acceptance range for blobs lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball out of pitch refereeing skill function. However, it sometimes yields false-positive and false-negative results as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
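As a sketch, the condition above can be wrapped in a standalone predicate (the 1.5, 2 and 4 factors are taken directly from the text; the example blob sizes are illustrative):

```python
def possible_collision(major_axis, minor_axis, min_object_radius):
    """A single blob that is clearly elongated yet thick in both
    directions is assumed to be two players in contact."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_object_radius
            and major_axis >= 4 * min_object_radius)

# Two touching players merged into one 40x80-pixel blob (minimal
# expected player radius 15 px) trigger the detection:
hit = possible_collision(major_axis=80.0, minor_axis=40.0, min_object_radius=15.0)
# A single, nearly round player blob does not:
single = possible_collision(major_axis=32.0, minor_axis=30.0, min_object_radius=15.0)
```

The elongation test alone would also fire on a player seen at a shallow angle; the two size conditions are what restrict the match to blobs large enough to contain two players.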
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in various ways. In this project, the planar position (x-y-ψ) of the refereeing agent (here only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under an essential assumption: the drone's angular positions are stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore these two coordinates are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter whose output data is accessible, and the obtained altitude is fused with the planar position data. The information obtained from the different position measurements is composed into the vector given below, which is used as 'droneState'.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperat2.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Next, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the image center) with respect to the origin of the drone is known; it is a drone-fixed position vector lying along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be aligned according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera, so the height information of the drone is used: from the height of the drone and the FOV of the camera, the pixel-to-millimeter ratio is calculated. More detailed information about the FOV is given in the next sections.<br />
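Under the assumptions listed above, the whole chain (pixel offset, altitude-dependent scaling, camera offset, yaw rotation) can be sketched as follows. The FOV value and camera offset are illustrative, and the exact mapping of image axes onto the drone's x/y axes is an assumption here, as the text only fixes it up to the figure:

```python
import math

def pixel_to_field(px, py, img_w, img_h, drone_x, drone_y, psi, z,
                   cam_offset=0.20, fov_diag_deg=70.0):
    """Map an image pixel to field coordinates (meters), assuming the
    camera looks straight down from altitude z."""
    # Pixel offset from the image center (image y grows downward).
    u = px - img_w / 2.0
    v = -(py - img_h / 2.0)
    # Meters per pixel from altitude and the diagonal FOV.
    diag_px = math.hypot(img_w, img_h)
    diag_m = 2.0 * z * math.tan(math.radians(fov_diag_deg) / 2.0)
    m_per_px = diag_m / diag_px
    # Position in the drone frame; camera offset lies along drone x.
    xd = cam_offset + v * m_per_px   # assumed axis mapping
    yd = u * m_per_px
    # Rotate by yaw and translate to the field frame.
    xf = drone_x + xd * math.cos(psi) - yd * math.sin(psi)
    yf = drone_y + xd * math.sin(psi) + yd * math.cos(psi)
    return xf, yf

# Ball at the image center, drone at (2, 3) with psi = 0 at 1.5 m
# altitude: the ball sits cam_offset ahead of the drone.
x, y = pixel_to_field(320, 180, 640, 360, 2.0, 3.0, 0.0, 1.5)
```

Note how the tilt-free assumption shows up: altitude z enters only through the scale factor, not through any perspective correction.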
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block may send 'detect ball' as a task for agent A (the drone) and 'locate player' for agent B. The path-planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent's controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects like the ball whether or not it has been updated by an agent's camera. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path-planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, it is also possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object so that they meet at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as a reference, the trajectory becomes less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply extrapolated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should be determined only in the moving direction of the turtle. Hence, only the X component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
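The search over the time ahead can be sketched as follows (a Python illustration, not the project code; `drone_time_to` stands for the assumed drone motion model returning the time-to-target TT, and a constant-velocity ball is assumed):

```python
def find_time_ahead(drone_time_to, ball_pos, ball_vel, t_max=5.0, dt=0.05):
    """Search for the smallest look-ahead t0 such that the drone can
    reach the predicted ball position in about t0 seconds, i.e. t0 = TT.
    drone_time_to(target) is the assumed drone model; the ball is
    extrapolated with constant velocity."""
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if drone_time_to(target) <= t0:   # t0 = TT reached: intercept point
            return t0, target
        t0 += dt
    # No feasible intercept within t_max: fall back to the farthest prediction.
    return t_max, (ball_pos[0] + ball_vel[0] * t_max,
                   ball_pos[1] + ball_vel[1] * t_max)
```

With a fixed time-to-target of 1 s, the search returns t0 ≈ 1 s and the ball position one second ahead, as expected.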
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents in a way that avoids collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning that is calculated based on the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drones' states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that results in collision avoidance, and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance will not be implemented. However, it could be an area of interest for those who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
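Since this block was not implemented in the project, the trigger-and-repel logic can only be sketched; in the Python illustration below the safe-distance threshold, repel speed and tie-breaking rule are all assumptions:

```python
import math

def avoidance_commands(p1, v1, p2, v2, safe_dist=1.0, repel_speed=1.0):
    """If two drones are closer than safe_dist, return strong velocity
    commands perpendicular to each drone's own velocity, pushing them
    apart; otherwise return None (no intervention needed)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) >= safe_dist:
        return None

    def perp_away(v, away):
        # Of the two unit vectors perpendicular to v, pick the one
        # with a non-negative component along the 'away' direction.
        n = math.hypot(v[0], v[1]) or 1.0
        c1 = (-v[1] / n, v[0] / n)
        c2 = (v[1] / n, -v[0] / n)
        return c1 if c1[0] * away[0] + c1[1] * away[1] >= 0 else c2

    cmd1 = perp_away(v1, (-dx, -dy))   # drone 1 moves away from drone 2
    cmd2 = perp_away(v2, (dx, dy))     # drone 2 moves away from drone 1
    return ([repel_speed * c for c in cmd1],
            [repel_speed * c for c in cmd2])
```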
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
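The set/get pattern of the tables can be sketched in a few lines (a Python analogue of the MATLAB World Model class; method names follow the spirit of the tables but the details are assumptions):

```python
class Player:
    """A player is a class of its own, since the number of players varies."""
    def __init__(self):
        self.pos = (0.0, 0.0)

class WorldModel:
    """Sketch of the storage pattern: data is changed only through
    explicit 'set' methods, preventing accidental overwrites."""
    def __init__(self, n_players):
        self.ball = (0.0, 0.0)
        self.drone = (0.0, 0.0, 0.0)   # x, y, yaw
        self.turtle = (0.0, 0.0)
        self.players = [Player() for _ in range(n_players)]

    def set_ball(self, x, y):
        self.ball = (x, y)

    def set_player(self, i, x, y):
        self.players[i].pos = (x, y)

# Initialized as W = WorldModel(n), with n players per team:
W = WorldModel(5)
W.set_ball(1.2, -0.4)
```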
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. A particle filter, also known as Monte Carlo Localization, is chosen. The main reason is that a particle filter can handle multiple-object tracking, which proves useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as explained below. Ideally, this filter should perform three tasks:<br>
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of two hypotheses, each representing a potential ball position. The first uses a 'strong' filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second uses a 'weak' filter, in the sense that it hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track of whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter's estimate acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br>
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; rather, its hypothesis is updated purely by new measurements. In case two consecutive measurements are more than 0.5 m away from the estimate at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
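The reset rule described above can be sketched as follows (Python illustration; the real system uses a particle filter, here replaced by a simple position blend for brevity, and the blend weight is an assumption):

```python
import math

class BallTracker:
    """Sketch of the reset rule: if two consecutive measurements lie
    more than `threshold` (0.5 m) from the current strong estimate,
    the last measurement becomes the new initial position."""
    def __init__(self, pos, threshold=0.5):
        self.pos = pos
        self.threshold = threshold
        self.outliers = 0

    def update(self, z):
        d = math.hypot(z[0] - self.pos[0], z[1] - self.pos[1])
        if d > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:        # change of direction confirmed
                self.pos = z
                self.outliers = 0
        else:
            self.outliers = 0
            # Normal strong-filter update would go here; simple blend:
            self.pos = (0.8 * self.pos[0] + 0.2 * z[0],
                        0.8 * self.pos[1] + 0.2 * z[1])
        return self.pos
```

A single outlier leaves the estimate untouched (possible false positive); a second nearby outlier resets the estimate.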
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given in the table below. Increasing α_v makes the filter 'stronger', increasing α_x makes the filter 'weaker' (i.e. it trusts the measurements more) and increasing α_z makes the filter 'stronger' with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br>
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter which source a measurement comes from. Ideally, these sensors would pass along a confidence parameter, such as a variance in the case of normally distributed uncertainty. This variance determines how much a measurement is trusted, and distinguishes accurate from inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br>
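A plausible form of the velocity/position update is sketched below; note that the exact equation is only available in the figure above, so the complementary-filter structure and coefficients here are assumptions for illustration (α_z is omitted):

```python
def particle_update(v_old, x_old, z_new, z_old, dt,
                    alpha_v=0.8, alpha_x=0.2):
    """Hypothetical update using the symbols defined in the text:
    v_old/x_old are the previous particle velocity and position,
    z_new/z_old the new and previous measurements, dt the time step.
    Blend weights alpha_v and alpha_x are illustrative only."""
    # Finite-difference velocity from the two measurements.
    v_meas = ((z_new[0] - z_old[0]) / dt, (z_new[1] - z_old[1]) / dt)
    # A 'strong' filter keeps alpha_v high: the velocity changes slowly.
    v_new = tuple(alpha_v * vo + (1 - alpha_v) * vm
                  for vo, vm in zip(v_old, v_meas))
    # Propagate the position, then pull it toward the measurement.
    x_pred = tuple(xo + vn * dt for xo, vn in zip(x_old, v_new))
    x_new = tuple((1 - alpha_x) * xp + alpha_x * zn
                  for xp, zn in zip(x_pred, z_new))
    return v_new, x_new
```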
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case where the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the 'Match' function, nested in the particle filter function. <br>
In short, this 'Match' function matches the incoming set of measured positions to the players that are closest to them. It performs a nearest-neighbor search for each incoming position measurement, to match it to the last known positions of the players in the field. However, the implemented algorithm is not optimal when the set of nearest neighbors does not correspond to a set of unique players (i.e. when two measurements are matched to the same player). In that case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this is generally not a problem. However, with a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br>
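The greedy matching described above can be sketched as follows (a Python illustration of the MATLAB 'Match' function; names and details are assumed):

```python
import math

def match_measurements(measurements, last_positions):
    """Greedy nearest-neighbor matching: each measurement is matched to
    the closest unclaimed player; if its nearest player is already taken,
    the next-nearest is used. Returns {measurement index: player index}."""
    assignment = {}
    taken = set()
    for m_idx, m in enumerate(measurements):
        # Players sorted by distance to this measurement.
        order = sorted(range(len(last_positions)),
                       key=lambda p: math.hypot(m[0] - last_positions[p][0],
                                                m[1] - last_positions[p][1]))
        for p in order:
            if p not in taken:
                assignment[m_idx] = p
                taken.add(p)
                break
    return assignment
```

Note that this greedy scheme is not globally optimal, which mirrors the limitation discussed in the text.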
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw rate and vertical speed. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control system for the drone can be robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the reflected drone position information is incomplete. The example (fig.2) provides a visualization of the original data measured from the top camera. Based on fig.2, the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces reasonable estimates for the empty data points. <br />
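The interpolation step can be sketched as follows (a Python illustration; the project performs this preprocessing in MATLAB, and the handling of leading/trailing gaps is an assumption):

```python
def fill_gaps(samples):
    """Linearly interpolate missing camera samples (None entries);
    leading/trailing gaps are held at the nearest known value."""
    out = list(samples)
    known = [i for i, s in enumerate(out) if s is not None]
    if not known:
        return out
    for i in range(len(out)):
        if out[i] is not None:
            continue
        prev = max((k for k in known if k < i), default=None)
        nxt = min((k for k in known if k > i), default=None)
        if prev is None:          # leading gap: hold first known value
            out[i] = out[nxt]
        elif nxt is None:         # trailing gap: hold last known value
            out[i] = out[prev]
        else:                     # interior gap: linear interpolation
            w = (i - prev) / (nxt - prev)
            out[i] = (1 - w) * out[prev] + w * out[nxt]
    return out
```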
====Coordinate systems====
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
Data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model identified is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
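The rotation between frames can be sketched as follows (a Python illustration of the concept in the block diagram; the sign conventions are assumptions):

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame using the
    measured yaw psi (standard 2-D rotation matrix)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)

def global_to_body(vx_glob, vy_glob, psi):
    """Inverse rotation (transpose of the rotation matrix)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_glob + s * vy_glob,
            -s * vx_glob + c * vy_glob)
```

Applying both in sequence recovers the original vector, which is what allows filtering in the body frame and feeding the result back in the global frame.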
====Model identification from input to position ====
The inputs and the corresponding velocity outputs are, in theory, decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response of input b is measured by the top camera. The preprocessed data is shown in following. And this processed data will be used in model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figure above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty. Hence, some assumptions need to be made to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result represents how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there is a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
The nonlinear behavior of the system may explain the mismatch between the identified model and the real response.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, has been estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and with the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing effort, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the most recent state of the drone together with the field of view and resolution of the camera (which are defined in the initialization function), these estimators generate settings for the line, ball and object detection algorithms to reduce false positives, errors and the processing time of the algorithms. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection skill is implemented using the ''imfindcircles'' built-in function of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the available height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height information is obtained from the drone position data and the others are defined in the initialization function. The estimated ball radius in pixel units is calculated here and fed into the ball detection skill.<br />
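The radius estimate can be sketched as follows (a Python illustration of the MATLAB computation; the search range mimics the [rmin rmax] input that imfindcircles expects, and the tolerance is an assumption):

```python
import math

def expected_ball_radius_px(ball_radius_mm, height_mm, diag_fov_deg,
                            res_x, res_y, tol=0.2):
    """Estimate the ball radius in pixels from the drone height and the
    camera's diagonal FOV, returning a [min, max] search range."""
    diag_px = math.hypot(res_x, res_y)
    # Ground millimeters per pixel, assuming a downward-facing pinhole camera.
    mm_per_px = (2.0 * height_mm
                 * math.tan(math.radians(diag_fov_deg) / 2.0)) / diag_px
    r = ball_radius_mm / mm_per_px
    return [int((1 - tol) * r), int(math.ceil((1 + tol) * r))]
```

For a standard football (radius ~110 mm) seen from 1 m with a 60° diagonal FOV at 640x480, this yields a search range of roughly 60 to 92 pixels.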
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected sizes of the objects in pixels are estimated using the drone height and the FOV. Instead of the ball radius, the real sizes of the objects are defined here. The estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. This estimator always calculates the relative position of the outer lines corresponding to the state of the drone. This position information is encoded using the Hough transform parameterization. The line estimator is required for enabling and disabling line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection skill should be enabled; otherwise it should be disabled. This information is also encoded in the output matrix of the functional block, because an always-running ''Line Detection'' skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
The more detailed information and the algorithm behind this estimator are explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, an additional column is added to the output matrix of the ''Line Estimator'' function to show whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone as given on the manufacturer's website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, low-level control of a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing it should look downwards. Therefore, the first idea was to disassemble it and mount the camera on a swivel to tilt it down 90 degrees, which would require some structural changes. Since the entire implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera is not easy or straightforward in MATLAB. Further effort showed that using this drone camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore, an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition is shown in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a horizontal FOV close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2. The corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained using the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a Wi-Fi camera with a TCP/IP interface was selected. This camera is called the Ai-Ball and is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
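The initiation sequence can be sketched as follows (a Python illustration; the AT command format follows the AR.Drone SDK, but the helper names are ours and the networking code is untested against hardware):

```python
import socket

DRONE_IP = "192.168.1.1"
AT_PORT = 5556
NAV_PORT = 5554

def at_command(name, seq, *args):
    """Build an AR.Drone AT command string, e.g. 'AT*FTRIM=1\r'.
    Sequence numbers must increase with every command sent."""
    payload = ",".join(str(a) for a in args)
    body = "AT*{}={}".format(name, seq) + ("," + payload if payload else "")
    return body + "\r"

def init_drone():
    """Sketch of the initiation steps described above (not run here):
    wake the navdata stream, then set the horizontal-plane reference."""
    cmd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.bind(("", NAV_PORT))
    # A small trigger packet to the navdata port starts the stream.
    nav.sendto(bytes([0x01, 0x00, 0x00, 0x00]), (DRONE_IP, NAV_PORT))
    cmd.sendto(at_command("FTRIM", 1).encode(), (DRONE_IP, AT_PORT))
    return cmd, nav
```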
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be regarded as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
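The input side of the wrapper can be sketched as follows (a Python illustration; the AR.Drone protocol sends each float argument of the AT*PCMD command reinterpreted as a signed 32-bit integer, which the helper below reproduces, but the function names are ours):

```python
import struct

def clamp(v):
    """Limit a wrapper input to the valid range [-1, 1]."""
    return max(-1.0, min(1.0, float(v)))

def pcmd_args(x_tilt, y_tilt, z_speed, yaw_speed):
    """Convert the wrapper's four inputs in [-1, 1] to the integer
    arguments of AT*PCMD: each 32-bit float is reinterpreted bitwise
    as a signed 32-bit integer, per the AR.Drone protocol."""
    def f2i(f):
        return struct.unpack("<i", struct.pack("<f", f))[0]
    vals = [clamp(x_tilt), clamp(y_tilt), clamp(z_speed), clamp(yaw_speed)]
    # Leading flag=1 enables progressive commands (tilts taken into account).
    return [1] + [f2i(v) for v in vals]
```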
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to the desired location.<br />
<br />
The ‘GigeCam’ toolbox of MATLAB® is used to communicate with this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of the drone. A snapshot of the field together with the agent is taken with a short exposure time. Then, by processing this image to find the pixels illuminated by the LEDs on the drone, the coordinates on the x and y-axes are obtained. The yaw (ψ) orientation of the drone is also obtained from the relative positions of these pixels.<br />
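The pose extraction from the LED pixels can be sketched as follows (a Python illustration; the assumption that the front LED is the one farthest from the centroid of the three is ours, and the actual LED layout may differ):

```python
import math

def drone_pose_from_leds(leds):
    """Estimate (x, y, yaw) from three detected LED pixel positions.
    Position is taken as the centroid; yaw as the direction from the
    centroid to the assumed front LED (illustrative geometry)."""
    cx = sum(p[0] for p in leds) / 3.0
    cy = sum(p[1] for p in leds) / 3.0
    # Assume the front LED is the one farthest from the centroid.
    front = max(leds, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    yaw = math.atan2(front[1] - cy, front[0] - cx)
    return cx, cy, yaw
```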
<br />
The top-cam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. the image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, one needs a Wi-Fi antenna. The camera is mounted facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. This information is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details of the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any extension, since part of the existing code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (x<sub>d</sub>, y<sub>d</sub>, θ<sub>d</sub>), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple paths such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x,y,θ), measured from the top-camera images, are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the drone's built-in speed controller, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Finally, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in a direction is smaller than a predefined value, the output of the controller is zero. This creates a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller (Fig. 5). If the error is larger than this value, the output is determined from the error and its derivative with PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an integral action is not necessary. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents small commands in the oscillation region from being sent to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix based on the Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
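The dead-zone PD logic described above can be sketched per axis as follows. This is an illustrative Python sketch with hypothetical gains; the actual controller is implemented in Simulink:

```python
def hlc_command(error, d_error, dead_zone, kp, kd):
    """High-level controller for one axis: zero output inside the dead zone,
    PD action outside it. The error is deliberately NOT offset by the
    dead-zone width, to avoid small commands in the oscillatory region of
    the drone's built-in LLC."""
    if abs(error) < dead_zone:
        return 0.0  # comfort zone: the drone holds position
    return kp * error + kd * d_error

# Hypothetical gains: with |error| = 0.05 m inside a 0.1 m dead zone, no
# command is sent; with error = 0.5 m the PD law produces a speed setpoint.
v = hlc_command(0.5, -0.1, dead_zone=0.1, kp=1.0, kd=0.2)
```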
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles, and the order in which the rotations are applied matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (φ, θ, ψ).<br />
In this project, since the motion in the z-direction is kept constant and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
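The reduced, yaw-only transformation can be sketched as follows. This Python illustration shows the rotation that maps a global velocity command into the drone frame; the project itself implements this step in Simulink:

```python
import math

def global_to_drone(vx_g, vy_g, yaw):
    """Transform a velocity command from global field coordinates to drone
    coordinates, using the reduced rotation matrix that depends on the yaw
    angle only (roll and pitch are assumed approximately zero)."""
    c, s = math.cos(yaw), math.sin(yaw)
    # Inverse (transpose) of the planar rotation matrix R(yaw).
    vx_d = c * vx_g + s * vy_g
    vy_d = -s * vx_g + c * vy_g
    return vx_d, vy_d

# With the drone yawed 90 degrees, a global +x command becomes a drone -y command.
```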
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to drive three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, the robots must be able to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
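A minimal sketch of this UDP interface in Python is shown below. The IP address, port and command-string format are placeholders: the actual protocol is defined by the scripts in the GitHub repository.

```python
import socket

def send_robot_command(ip, port, command):
    """Send a command string via UDP to the Python script running on the
    robot's Raspberry Pi. The string format is a placeholder; the real
    format is defined in the project's repository."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(command.encode("ascii"), (ip, port))
    finally:
        sock.close()

# Hypothetical example: command the omni-wheel robot to drive forward.
# send_robot_command("192.168.1.42", 5005, "move 0.5 0.0 0.0")
```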
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many different subsystems and components, each with its own tasks and functions. All of these tasks and skills are implemented using built-in Matlab® functions and libraries. Communication between all the components is required to handle queuing and ordering according to the tasks and project aims. Some of the processes need to run simultaneously, some are consecutive and some are independent. Additionally, the implementation should be compatible with the system architecture, which is layered. To handle the simultaneous communication and the layered structure, Simulink is used for programming. The Simulink diagram created for the project is given in the following figure.<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px|Simulink Diagram of the overall system]]<br />
<br />
== Properties of the Simulink File ==<br />
* In the Simulink diagram, blocks are categorized using the ''Area'' utility, according to their functions/tasks.<br />
* The categorization is shown via different colors, and these divisions are consistent with the system architecture.<br />
* The interconnections between the functional blocks are made via ''GoTo'' and ''From'' blocks, and the names of the transferred parameters are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation. <br />
* The functions and blocks in the ''Visualizations'' area are not part of the main tasks of the project, but are necessary to see the results; therefore this part is not included in the ''System Architecture''.<br />
* Since most functions and built-in commands used in the algorithms are not directly available in Simulink, each Matlab function is declared with the ''coder.extrinsic'' command and called from Simulink.<br />
* The blocks, functions and developed code are well commented; the details of the algorithms can be examined in the source code.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are given with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the code base of TechUnited. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and which send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>

Implementation MSD16 (2017-10-23T07:24:02Z, Tolcer: /* Integration */)
https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45728
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the BOOP detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as is and do not alter it. Preferably we would use this software to also process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and reused. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter out some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For each remaining candidate ball, a confidence is calculated based on the blob size and roundness: <br />
<br />
 confidence = (minorAxis / majorAxis) * (min(R_blob, R_ball) / max(R_blob, R_ball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
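The confidence formula above can be written as a small function. This is a Python sketch; the project code itself is Matlab:

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a blob is the ball: the product of its roundness
    (minor/major axis ratio) and how well its radius matches the expected
    ball radius; both factors lie in [0, 1]."""
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match

# A perfectly round blob with exactly the expected radius scores 1.0;
# elongated or wrongly-sized blobs score lower.
```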
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering the color on the CbCr plane, the filtering is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of their surroundings. Moreover, the range of blob sizes accepted as possible players is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different than one seen from an angle. A wider acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted by the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is still known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well; further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection makes use of the list of blobs that is generated in the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
 if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
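The same check as a self-contained function, in a Python sketch of the Matlab condition above:

```python
def possible_collision(minor_axis, major_axis, min_radius):
    """Flag a blob as a possible collision: elongated (two players merged
    into a single blob, so the axis ratio exceeds 1.5) while still at
    least two player-widths across in every direction."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)
```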
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are well stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, and its output data is accessible. The obtained altitude data is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as ‘droneState’.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates have to be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperat2.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Next, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, i.e. the tilting of the drone about its roll (φ) and pitch (θ) axes is neglected.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the x-axis of the drone. Taking the principles above into account and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be aligned as shown in the figure. <br />
<br />
Now, the pixel coordinates of the detected object have to be converted into real-world units (from pixels to millimeters). This ratio, however, changes with the height of the camera. Therefore, the height of the drone together with the FOV information of the camera is used to calculate the pixel-to-millimeter ratio. More detailed information about the FOV is given in the next sections.<br />
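Putting the principles above together, the whole pixel-to-field conversion can be sketched as follows. This is a Python illustration; the camera-offset value is a placeholder, and the axis conventions simply follow the assumptions listed above (downward-looking pinhole camera, zero roll/pitch, camera mounted along the drone's x-axis):

```python
import math

def pixel_to_world(u, v, drone_x, drone_y, yaw, height,
                   cam_offset=0.15, diag_fov_deg=60.0, w=640, h=480):
    """Convert a detection at pixel (u, v) to field coordinates, assuming a
    pinhole camera pointing straight down, mounted cam_offset metres ahead
    of the drone's centre along its x-axis (offset value is a placeholder)."""
    # Metres covered by one pixel at this height (diagonal-FOV model).
    scale = (2.0 * height * math.tan(math.radians(diag_fov_deg) / 2.0)
             / math.hypot(w, h))
    # Pixel offsets from the image centre, in metres (image v grows downwards).
    dx_cam = (u - w / 2.0) * scale
    dy_cam = -(v - h / 2.0) * scale
    # Camera position in the field frame: drone position plus the
    # forward-mounted offset, rotated by the drone's yaw.
    c, s = math.cos(yaw), math.sin(yaw)
    cam_x = drone_x + cam_offset * c
    cam_y = drone_y + cam_offset * s
    # Rotate the in-image offset from the drone frame to the field frame.
    return cam_x + c * dx_cam - s * dy_cam, cam_y + s * dx_cam + c * dy_cam

# A detection at the image centre maps to the camera's own ground position.
```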
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents, sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent; for instance, it sends 'detect ball' as a task to agent A (drone) and 'locate player' to agent B. The path planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig. 1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated from the agent cameras; in the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path planning block: first, the case of multiple drones, where collisions between them must be avoided; second, generating an optimal path as the reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can be taken into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between drone and ball is large, the drone should track a position ahead of the object so that they meet at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The question that arises is which optimal time ahead t0 should be set as the desired reference. To solve this, we require a model of the drone motion including the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, for each time step ahead of the ball, the time-to-target (TT) of the drone is calculated (see Fig. 3); the target position is simply the ball position predicted that time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the Turtle, so only the x-component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
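The search for the look-ahead time t0 can be sketched as follows. This Python illustration replaces the identified first-order drone model with a constant-speed model, and the step size and horizon are arbitrary:

```python
import math

def reference_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    dt=0.1, t_max=5.0):
    """Search for the smallest look-ahead time t0 such that the drone can
    reach the ball's predicted position [x(t+t0), y(t+t0)] in a
    time-to-target TT <= t0.  A constant-speed drone model stands in for
    the identified first-order LLC dynamics."""
    t0 = 0.0
    while t0 <= t_max:
        # Predicted ball position t0 seconds ahead (constant-velocity model).
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0], target[1] - drone_pos[1])
        tt = dist / drone_speed          # time-to-target of the drone
        if tt <= t0:
            return target                # meeting point found
        t0 += dt
    # Fall back to the current ball position if no meeting point is found.
    return ball_pos
```

For example, a drone at the origin moving at 2 m/s toward a ball at (2, 0) rolling away at 1 m/s meets it at (4, 0), two seconds ahead, rather than chasing the ball's current position.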
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above a field, the path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning based on the objectives of the drones (see Fig. 4). The collision avoidance block is triggered when the drone states meet criteria that indicate an imminent collision between them. The supervisory control then switches to the collision avoidance mode to keep the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone. This command is sent to the LLC and stopped once the drones are in safe positions. Since this project deals with only one drone, collision avoidance was not implemented; it could, however, be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
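The storage interface described above can be sketched as follows. This Python illustration shows the described 'set'-function pattern; the real class, its exact function names and the full state it stores live in the project repository:

```python
class Player:
    """Last known position of one player (a class of its own, since the
    number of players per team can vary)."""
    def __init__(self):
        self.position = None

class WorldModel:
    """Storage sketch of the World Model: data can be read freely but is
    changed only through dedicated 'set' functions, so that processes
    cannot accidentally overwrite it.  Names are illustrative."""
    def __init__(self, n_players):
        self.ball = None
        self.drone = None
        self.turtle = None
        # n_players per team, two teams.
        self.players = [Player() for _ in range(2 * n_players)]

    def set_ball(self, pos):
        self.ball = pos

    def set_drone(self, state):
        self.drone = state

    def set_player(self, idx, pos):
        self.players[idx].position = pos

# W = WorldModel(4); W.set_ball((0.3, -1.2)); W.ball then holds the last
# known ball position, mirroring the W = WorldModel(n) initialization above.
```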
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It was chosen to use a particle filter, the technique also underlying Monte Carlo localization. The main reason is that a particle filter can handle multiple-object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations imply that tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of two hypotheses, each representing a potential ball position. The first one uses a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one uses a ‘weak’ filter, in the sense that it hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track of whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; instead, its hypothesis is updated purely by new measurements. In case two consecutive measurements are more than 0.5 meters away from the estimate at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
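The reinitialization rule described above can be sketched as follows (an illustrative Python sketch, not the MATLAB implementation; the 0.5 m threshold is taken from the text, the function name is hypothetical):<br />

```python
def check_reinit(estimate, recent_measurements, threshold=0.5):
    """Return the new initial position for the 'strong' filter, or None.

    If the two most recent measurements are both further than `threshold`
    (meters) from the current estimate, the last measurement becomes the
    new initial value: consecutive outliers in the same vicinity are
    interpreted as a change in direction rather than noise.
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    if len(recent_measurements) < 2:
        return None
    last_two = recent_measurements[-2:]
    if all(dist(m, estimate) > threshold for m in last_two):
        return last_two[-1]  # re-seed the strong filter here
    return None
```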
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters of the filter are given in Table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. it trusts the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all processed by the same particle filter, since the source of a measurement does not matter to the filter. Ideally, these sensors pass along a confidence parameter, like a variance in the case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate sensors from inaccurate ones. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
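The matching step can be sketched as follows (illustrative Python; the actual ‘Match’ function is MATLAB code, and its second-nearest-neighbor fallback is rendered here as a greedy assignment over still-unclaimed players):<br />

```python
def match(measurements, last_known):
    """Match each measured position to the nearest unclaimed player.

    `last_known` holds the last known player positions. Returns one player
    index per measurement. If the nearest player is already claimed by an
    earlier measurement, the next-nearest free player is taken instead,
    mirroring the second-nearest-neighbor fallback described in the text.
    """
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    claimed = set()
    assignment = []
    for m in measurements:
        order = sorted(range(len(last_known)),
                       key=lambda i: dist2(m, last_known[i]))
        idx = next(i for i in order if i not in claimed)
        claimed.add(idx)
        assignment.append(idx)
    return assignment
```

As noted above, this greedy scheme is not optimal, but with a high update frequency and only two players it is generally sufficient.<br />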
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and sideways velocities in the body frame can be measured by sensors inside the drone. At the same time, three LEDs on the drone can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the subsequent closed-loop control of the drone can be robust. Since the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig. 2) provides a visualization of the original data measured by the top camera. Based on fig. 2, the data clearly indicates what the motion of the drone looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces reasonable estimates for the empty data points. <br />
====Coordinate system introduction====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems exist: one in the body frame, the other in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame via the control signals (a, b, c, d), and the measured velocities are also expressed in the body frame. The positions measured by the top camera are calculated in the global coordinate system. <br><br />
The data can be transformed between body frame and global frame via the rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, which avoids having to design a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
====Model identification from input to position====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and will be used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions need to be made to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone exhibits a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimation is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatching part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification is measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br><br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the recent state of the drone and information about the field of view and resolution of the camera (which are defined in the initialization function), these estimators generate settings for the line, ball and object detection algorithms to reduce false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill is achieved using the ''imfindcircles'' built-in command of the image processing toolbox of MATLAB®. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in units of ''pixels'' in the image should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height information is obtained from the drone position data; the others are defined in the initialization function. The estimated ball radius in pixel units is calculated here and fed into the ball detection skill.<br />
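Under a simple pinhole model, this estimate can be sketched as follows (illustrative Python; it assumes the given FOV is the horizontal one matching the image width, whereas the real code may work with the diagonal FOV, and the 0.11 m ball radius used in the test is an assumption):<br />

```python
import math

def expected_ball_radius_px(height_m, ball_radius_m, fov_deg, image_width_px):
    """Estimate the expected ball radius in pixels for a downward-facing
    camera at a given height, assuming a pinhole model and that `fov_deg`
    is the horizontal field of view matching `image_width_px`."""
    # width of the ground strip seen by the camera, in meters
    ground_width_m = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    meters_per_px = ground_width_m / image_width_px
    return ball_radius_m / meters_per_px
```

The resulting radius (in pixels) would then be passed as the expected radius range to the circle detection.<br />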
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is defined here. The estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It continuously calculates the relative position of the outer lines corresponding to the state of the drone; this position information is encoded using the Hough transformation criteria. The line estimator is needed to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, otherwise it should be disabled. This information is also encoded in the output matrix of the functional block, because an always-running ''Line Detection'' skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
The more detailed information and the algorithm behind this estimator are explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, an additional column is added to the output matrix of the ''Line Estimator'' function to indicate whether a predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone as given on the manufacturer’s website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project it was decided to use the drone’s own structure, control electronics and software for positioning the drone. Apart from that, low-level control of a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing it should look downwards. Therefore the first idea was to disassemble it and connect the camera to a swivel, tilting it down by 90 degrees, at the cost of some changes to the structure. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images have to be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera is not easy or straightforward in MATLAB. Further effort showed that using this drone camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone brings some difficulties. Since the source code of the drone is not open, it is very hard to access some data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly from MATLAB is not possible with the drone's built-in software. Therefore an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is its field of view (FOV) angle; the definition of this angle can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed an FOV of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, that camera is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera is called the Ai-Ball and is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
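The initialization and flat-trim steps can be sketched in Python as follows (a hedged sketch based on the values listed above; the project itself uses MATLAB UDP objects, and the exact Navdata handshake follows the SDK steps shown in the picture, simplified here to the initial wake-up packet):<br />

```python
import socket

DRONE_IP = "192.168.1.1"
CONTROL_PORT = 5556   # AT commands
NAVDATA_PORT = 5554   # navigation data

def init_drone(ip=DRONE_IP):
    """Open the two UDP sockets, wake up the Navdata stream and send the
    flat-trim (FTRIM) command that sets the horizontal-plane reference."""
    control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata.settimeout(0.001)  # 1 ms timeout, as in the list above

    # Initiate the Navdata stream by sending a few bytes to the Navdata port
    navdata.sendto(b"\x01\x00\x00\x00", (ip, NAVDATA_PORT))

    # Set the horizontal-plane reference; sequence number 1
    control.sendto(b"AT*FTRIM=1,\r", (ip, CONTROL_PORT))
    return control, navdata
```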
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
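At the core of such a wrapper lies the encoding of the four floating-point commands into an AT*PCMD string, where each float is transmitted as the signed 32-bit integer sharing its IEEE-754 bit pattern (a Python sketch of this encoding, following the AR.Drone SDK argument order of roll, pitch, vertical speed, yaw rate; the actual wrapper is a MATLAB function):<br />

```python
import struct

def float_as_int(f):
    """Reinterpret a 32-bit float's bits as a signed 32-bit integer,
    which is how the AR.Drone AT protocol encodes float arguments."""
    return struct.unpack('<i', struct.pack('<f', f))[0]

def pcmd(seq, roll, pitch, gaz, yaw):
    """Build an AT*PCMD progressive-command string from four values
    in [-1, 1]: left-right tilt, front-back tilt, vertical speed,
    and angular speed around z. `seq` is the command sequence number."""
    args = ','.join(str(float_as_int(v)) for v in (roll, pitch, gaz, yaw))
    return 'AT*PCMD=%d,1,%s\r' % (seq, args)
```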
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to the desired location.<br />
<br />
The ‘GigeCam’ toolbox of MATLAB® is used to communicate with this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of it. A snapshot of the field together with the agent is taken with a short exposure time. This image is then searched for the pixels illuminated by the LEDs on the drone, from which the coordinates on the x and y-axes are obtained. The yaw (ψ) orientation of the drone is also obtained from the relative positions of these pixels.<br />
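The position part of this processing can be sketched as follows (an illustrative pure-Python sketch; the real implementation is in MATLAB, the threshold value is a placeholder, and the orientation computation from the relative LED positions is omitted):<br />

```python
def locate_drone(image, threshold=240):
    """Locate the drone in one short-exposure top-camera frame.

    `image` is a 2-D list of brightness values. Pixels brighter than
    `threshold` are assumed to belong to the drone LEDs; their centroid
    gives the drone position in pixel coordinates. Returns None when the
    LEDs are not visible in this frame (which is why the Kalman filter
    described earlier is needed).
    """
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)
```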
<br />
The top-cam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. the image processing) might be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a Wi-Fi connection; to connect to the camera, one needs a Wi-Fi antenna. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera were removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project the Turtle was used as a referee. The software developed at TechUnited did not need any further extension, as part of the extensive existing code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are simple trajectories such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet a specific tracking criterion; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values of the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the motion equation of the drone, an I action is not necessary in the controller. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region have not been offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control commands must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
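For a single axis, the dead-zone PD law described above can be sketched as (illustrative Python; the gains and dead-zone width are placeholders, not the tuned project values):<br />

```python
def dead_zone_pd(error, d_error, dead_zone=0.05, kp=0.5, kd=0.1):
    """Per-axis high-level controller: zero output inside the dead zone
    (the drone's 'comfort zone'), PD action outside it. Note that the
    error is not offset by the dead-zone width, to avoid sending small
    commands in the LLC's oscillation region."""
    if abs(error) < dead_zone:
        return 0.0
    u = kp * error + kd * d_error
    # saturate to the drone's command range [-1, 1]
    return max(-1.0, min(1.0, u))
```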
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Ф, φ, θ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles, and within this method the order of the rotations about the specific axes is important. In the fields of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
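The reduced transformation can be sketched as follows (a Python sanity check of the yaw-only rotation from the global frame to the body frame, not the Simulink block):<br />

```python
import math

def global_to_body(vx_g, vy_g, psi):
    """Rotate a planar command from the global frame into the drone body
    frame using only the yaw angle `psi` (radians), which is valid while
    pitch and roll stay small."""
    c, s = math.cos(psi), math.sin(psi)
    vx_b = c * vx_g + s * vy_g
    vy_b = -s * vx_g + c * vy_g
    return vx_b, vy_b
```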
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi. Furthermore, it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many different subsystems and components. Each component has its own tasks and functions, all of which are implemented using built-in Matlab® functions and libraries. Communication between the components is required for queuing and ordering according to the tasks and project aims. Some tasks and functions need to run simultaneously, some are consecutive, and others are independent. Additionally, the implementation should be compatible with the system architecture, which is organized in layers. To handle the simultaneous communication and the layered structure, Simulink is used for programming. The Simulink diagram created for the project is given in the following figure.<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px|Simulink Diagram of the overall system]]<br />
<br />
== Properties of the Simulink File ==<br />
* In the Simulink diagram, blocks are categorized using the ''Area'' utility, according to their functions/tasks.<br />
* The categorization is shown via different colors, and these divisions are consistent with the system architecture.<br />
* The interconnections between the functional blocks are achieved via ''GoTo'' and ''From'' blocks, and the transferred parameter names are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation. <br />
* The functions and blocks under the ''Visualizations'' area are not part of the main tasks of the project, but are necessary to see the results. Therefore this part is not included in the ''System Architecture''.<br />
* Since almost all of the functions and built-in commands in the algorithms are not directly available in Simulink, each Matlab function is called from Simulink using the ''extrinsic'' command.<br />
* The blocks, functions and developed code are well commented. The details of the algorithms can be examined in the source code.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code from the TechUnited code base was taken out. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and which send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted in the figure above. The code can be accessed through the repository.<br />
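The receiving side of such a UDP link can be sketched as follows. This Python sketch is purely illustrative: the port number and the payload layout (six little-endian doubles for the Turtle, ball and player positions) are assumptions for demonstration, not the actual TechUnited wire format, which is defined by the S-function and the RTDb.

```python
import socket
import struct

def parse_worldmap(data):
    """Unpack a hypothetical WorldMap packet: six little-endian
    float64 values (turtle x/y, ball x/y, player x/y). The layout
    is an assumption, not the real TechUnited format."""
    x_t, y_t, x_b, y_b, x_p, y_p = struct.unpack("<6d", data[:48])
    return {"turtle": (x_t, y_t), "ball": (x_b, y_b), "player": (x_p, y_p)}

def receive_worldmap(port=2001, timeout=1.0):
    """Listen for one UDP packet and parse it (port is an assumption)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(timeout)
    data, _ = sock.recvfrom(1024)
    sock.close()
    return parse_worldmap(data)
```

In the project itself this role is filled by the Simulink UDP Receive block rather than hand-written socket code.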
<br />
=References=<br />
<references/></div>
Tolcer
https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45727
Implementation MSD16
2017-10-23T07:22:24Z
<p>Tolcer: /* Estimator */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is and do not alter it. Preferably, we would also use this software to process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
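As an illustration of this color-space step, the conversion can be sketched in Python using the full-range BT.601 coefficients. Note this is only a stand-in: Matlab's rgb2ycbcr uses the studio-range variant, so the constants differ slightly from what the project actually runs.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image (H x W x 3, values 0-255) to YCbCr with
    full-range BT.601 coefficients; an illustrative stand-in for the
    Matlab conversion used in the project."""
    rgb = np.asarray(rgb, dtype=float)
    y  =        0.299    * rgb[..., 0] + 0.587    * rgb[..., 1] + 0.114    * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5      * rgb[..., 2]
    cr = 128 + 0.5      * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return np.stack([y, cb, cr], axis=-1)
```

A neutral gray pixel maps to (Y, 128, 128), which is why color thresholds are conveniently expressed in the CbCr plane.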
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is not changed. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used can be red, orange or yellow; colors that are in the upper-left corner of the CbCr-plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image a blob recognition algorithm returns blobs with their properties, such as the blob center and major- and minor axis length. From this list with blobs and their properties, it is determined if it could be a ball. Blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
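The confidence formula above can be written directly as a small function. This is a Python sketch; Rblob and Rball stand for the detected blob radius and the expected ball radius in pixels, as in the text.

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence in (0, 1]: the product of a roundness term and a
    size term, both equal to 1 for a perfectly round, perfectly
    sized blob (mirrors the formula in the text)."""
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size
```

A blob that is half as round and half the expected size thus scores 0.25, so a single threshold on the product rejects both defects at once.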
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering in the CbCr plane, it is done on the Y-axis only. Since the players are coated with black fabric, their Y-value will be lower than the surroundings. Moreover, the range of detected blobs that could be players is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball. If a player is seen from the top, it will appear different than when seen from an angle. A bigger range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball out of pitch refereeing skill function. However, it sometimes yields false positive and false negative results as well. A further improvement of the refereeing is still necessary.<br />
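The coordinate-based fallback described here can be sketched as follows. The field-frame convention (origin at the pitch center) matches the text, while the function and parameter names, and the optional margin, are illustrative assumptions.

```python
def ball_out_of_pitch(ball_xy, half_length, half_width, margin=0.0):
    """Decide in/out purely from a (possibly particle-filter-predicted)
    ball position in the field frame, whose origin is the pitch center.
    half_length / half_width are half the pitch dimensions in meters."""
    x, y = ball_xy
    return abs(x) > half_length + margin or abs(y) > half_width + margin
```

In the real skill this check complements, rather than replaces, the image-based decision of the previous generation.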
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take the images of the playing field and see in the image that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since the collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object. Moreover, the axes are compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
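In Python the same condition reads as below, with the thresholds taken from the text and the names illustrative. A single elongated blob of roughly double size suggests two players standing against each other.

```python
def possible_collision(major_axis, minor_axis, min_radius):
    """Flag a blob as a possible player-player collision: elongated
    (axis ratio > 1.5) and large enough to contain two players
    (thresholds as given in the text)."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)
```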
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in diverse ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch, yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed based on an essential assumption: the drone's angular positions are stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore these two angles have not been taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter and the output data of the altimeter is accessible. The obtained drone altitude data is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as ‘droneState’.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperat2.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to be the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known. It is a drone-fixed position vector and lies along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now, the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). However, this ratio changes with the height of the camera. To achieve the conversion, the height information of the drone should be used. Using the height of the drone and the FOV information of the camera, the pixels-to-millimeters ratio is calculated. More detailed information about the FOV is given in the next sections.<br />
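The height-dependent conversion can be sketched as follows. The camera is assumed to look straight down (zero roll/pitch, as stated above); the function and parameter names are illustrative, not the project's actual code.

```python
import math

def pixels_to_mm(pixel_offset, height_mm, fov_rad, image_width_px):
    """Convert a pixel offset from the image center to millimeters
    on the ground plane. The visible ground width follows from the
    drone height and the camera's horizontal field of view."""
    ground_width_mm = 2 * height_mm * math.tan(fov_rad / 2)
    return pixel_offset * ground_width_mm / image_width_px
```

Doubling the drone altitude doubles the millimeters covered by each pixel, which is exactly why the altimeter data must be fused in.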
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as a task. Then, the path planning block requests from the World Model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects like the ball, whether it has been updated by an agent camera or not. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that are addressed in the path planning block. The first is related to the case of multiple drones, in order to avoid collisions between them. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, in the case of a large distance between drone and ball, the drone should track a position ahead of the object in order to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply extrapolated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied. For the ground robot, the reference value should be determined only in the moving direction of the Turtle. Hence, only the X component (Turtle moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
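The search in Fig.3 can be sketched as follows. The constant-speed drone model used here for the time-to-target TT is a deliberate simplification standing in for the actual drone-plus-controller model mentioned above; the names are illustrative.

```python
import math

def reference_ahead(drone_xy, drone_speed, ball_xy, ball_v, dt=0.05, t_max=5.0):
    """Step the look-ahead time t0, extrapolate the ball position,
    and accept the first t0 for which the drone's time-to-target TT
    (constant-speed model, an assumption) no longer exceeds t0."""
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_xy[0] + ball_v[0] * t0, ball_xy[1] + ball_v[1] * t0)
        dist = math.hypot(target[0] - drone_xy[0], target[1] - drone_xy[1])
        tt = dist / drone_speed
        if tt <= t0:
            return target, t0
        t0 += dt
    # fall back to the furthest look-ahead if no intersection is found
    return (ball_xy[0] + ball_v[0] * t_max, ball_xy[1] + ball_v[1] * t_max), t_max
```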
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents in a way that avoids collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning calculated based on the objectives of the drones (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that results in collision avoidance, and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be a possible area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
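The perpendicular repel command described above can be sketched in a few lines. The names and the fixed magnitude are illustrative; in practice the magnitude, and whether each drone turns left or right, would be tuned per drone pair.

```python
import math

def repel_command(v, magnitude=1.0):
    """Return a velocity command perpendicular to the drone's current
    velocity vector v = (vx, vy), as the text prescribes. The chosen
    sign (counter-clockwise here) is an arbitrary convention."""
    norm = math.hypot(v[0], v[1]) or 1.0  # avoid division by zero
    return (-v[1] / norm * magnitude, v[0] / norm * magnitude)
```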
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act as a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players) to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
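The storage idea can be mirrored in a few lines. This Python sketch only imitates the described interface; the actual (non-integrated) class is written in Matlab and its exact set functions are listed in the tables above.

```python
class WorldModel:
    """Sketch of the storage unit: state is read freely but changed
    only through explicit set_* methods, so no process overwrites
    WM data by accident. Only the number of players is variable."""

    def __init__(self, n_players):
        self.ball = None
        self.drone = None
        self.turtle = None
        self.players = [None] * n_players

    def set_ball(self, pos):
        self.ball = pos

    def set_drone(self, pos):
        self.drone = pos

    def set_turtle(self, pos):
        self.turtle = pos

    def set_player(self, i, pos):
        self.players[i] = pos
```

Initialization mirrors the text: `W = WorldModel(n)` with n the number of players per team, while the number of balls stays hardcoded to one.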
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
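The reset rule just described (two consecutive measurements more than 0.5 m from the estimate re-initialize the strong filter) can be sketched as follows; function and variable names are illustrative.

```python
import math

def update_estimate(est, measurements, outlier_dist=0.5):
    """If the last two measurements are both further than
    `outlier_dist` meters from the current strong-filter estimate,
    the last one becomes the new initial value (reset=True).
    Otherwise the estimate is kept (reset=False)."""
    last_two = measurements[-2:]
    if len(last_two) == 2 and all(
        math.hypot(m[0] - est[0], m[1] - est[1]) > outlier_dist for m in last_two
    ):
        return last_two[-1], True
    return est, False
```

Requiring two outliers instead of one is the text's guard against a single false positive from the image processing.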
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given in the table below. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. it trusts the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the Turtle. These measurements are all fed to the same particle filter, as it does not matter from what source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
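The greedy nearest-neighbor matching described above can be sketched as follows. As the text notes, this greedy scheme is not globally optimal when assignments conflict; the names are illustrative.

```python
import math

def match_measurements(measurements, player_positions):
    """Greedily assign each measured position to the nearest player
    that is still unassigned; on a conflict, the later measurement
    falls back to its next-nearest free player."""
    assignment = {}
    taken = set()
    for i, m in enumerate(measurements):
        order = sorted(
            range(len(player_positions)),
            key=lambda j: math.hypot(m[0] - player_positions[j][0],
                                     m[1] - player_positions[j][1]),
        )
        for j in order:
            if j not in taken:
                assignment[i] = j
                taken.add(j)
                break
    return assignment
```

With a high update frequency and two players this rarely misassigns, matching the observation in the text; with many players entering and leaving the field of view a globally optimal assignment would be safer.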
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for pitch angle, roll angle, yaw angle and vertical velocity. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the subsequent closed-loop control of the drone is robust. As the flying height of the drone has no strict requirements in this system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-back tilt, a floating-point value in the range [-1 1]. Command (b) is the left-right tilt, a floating-point value in the range [-1 1]. Command (d) is the drone angular speed in the range [-1 1]. Forward and side velocities are displayed in the body frame (orange). The position (x, y, ψ) is displayed in the global frame (blue). ]]</center><br />
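The core idea — predict every sample, and update only when the top camera actually sees the LEDs — can be sketched in a one-dimensional form (an illustrative Python sketch, not the actual MATLAB implementation; the real filter uses the identified second-order models derived below):<br />

```python
def kalman_1d(estimate, variance, process_var, meas=None, meas_var=1.0):
    """One Kalman step: always predict; update only when a measurement exists."""
    # Predict: the uncertainty grows by the process noise each sample.
    variance = variance + process_var
    if meas is not None:
        # Update: blend prediction and measurement by their certainties.
        gain = variance / (variance + meas_var)
        estimate = estimate + gain * (meas - estimate)
        variance = (1.0 - gain) * variance
    return estimate, variance
```

When the camera misses the drone (meas is None), the estimate is simply propagated and its variance grows, which matches the behavior needed for the roughly 25% of empty camera samples mentioned below.<br />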
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the reported drone position information is incomplete. The example (fig. 2) gives a visual impression of the original data measured by the top camera. Based on fig. 2, the motion data clearly indicates what the motion of the drone looks like in one degree of freedom. To make the data continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation provides a reasonable estimate for the empty data points. <br />
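The gap filling can be sketched as follows (an illustrative Python sketch of linear interpolation over missing samples; the project itself uses MATLAB's interpolation routines):<br />

```python
def fill_gaps(samples):
    """Linearly interpolate over missing (None) camera samples.

    Leading/trailing gaps are filled by holding the nearest valid value.
    """
    out = list(samples)
    valid = [i for i, v in enumerate(out) if v is not None]
    if not valid:
        return out
    # Hold the edges where no neighboring sample exists.
    for i in range(valid[0]):
        out[i] = out[valid[0]]
    for i in range(valid[-1] + 1, len(out)):
        out[i] = out[valid[-1]]
    # Interpolate interior gaps between consecutive valid samples.
    for a, b in zip(valid, valid[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            out[i] = out[a] + t * (out[b] - out[a])
    return out
```
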
====Coordinate systems====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one in the body frame and one in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
Data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame so as to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
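The frame conversion amounts to a planar rotation by the yaw angle ψ (an illustrative Python sketch of the rotation applied outside the filter; the actual implementation is in MATLAB/Simulink):<br />

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame vector into the global frame (yaw psi in rad)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)

def global_to_body(vx_glob, vy_glob, psi):
    """Inverse rotation: global frame back to the body frame."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_glob + s * vy_glob,
            -s * vx_glob + c * vy_glob)
```
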
====Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are, in theory, decoupled in the body frame. Therefore, a dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model is evaluated in MATLAB by comparing it with the real response. The result expresses how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was measured and modeled with a delay of four samples due to the wireless communication. Compared with results measured several times, the estimate is nonetheless reasonable. <br><br><br />
<br />
In the real world, no system is perfectly linear; this nonlinear behavior may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated. The data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'', the expected diameter of the circles is important. To that end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The defined estimator blocks are the ball size, object size and line estimators. Using the recent state of the drone together with the field of view and resolution of the camera (which are defined in the initialization function), these estimators generate settings for the line, ball and object detection algorithms that reduce false positives, errors and the processing time of the algorithms. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill is achieved using the ''imfindcircles'' built-in command of the image processing toolbox of MATLAB®. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the height of the agent that carries the camera, the field of view of the camera and the real size of the ball. The height is obtained from the drone position data; the other values are defined in the initialization function. The estimated ball radius in pixels is calculated here and fed into the ball detection skill.<br />
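The radius calculation described above reduces to simple pinhole-camera geometry (an illustrative Python sketch; the 0.11 m ball radius, 60° FOV and 640 px width in the usage note are example values, not project constants):<br />

```python
import math

def ball_radius_px(ball_radius_m, height_m, fov_deg, image_width_px):
    """Expected ball radius in pixels for a downward-facing camera.

    Assumes fov_deg is the horizontal FOV spanning image_width_px pixels
    at distance height_m, and that the ball lies roughly on the ground.
    """
    # Width of the ground footprint seen by the camera.
    footprint_m = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    metres_per_px = footprint_m / image_width_px
    return ball_radius_m / metres_per_px
```

For example, a 0.11 m ball seen from 2 m with a 60° FOV over 640 px gives a radius of roughly 30 px, which would bound the radius range passed to the circle detection.<br />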
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, here the real size of the objects is defined. The estimated object radius in pixels is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It always calculates the relative position of the outer lines corresponding to the state of the drone; this position information is encoded using the Hough transform convention. The line estimator is required for enabling and disabling the line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled; otherwise it should be disabled. This information is also encoded in the output matrix of the functional block, since an always-running ''Line Detection'' skill would produce many false positive line outputs. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, an additional column is added to the output matrix of the ''Line Estimator'' function to indicate whether the predicted line is an ''end'' or a ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing. The built-in properties of the drone as given on the manufacturer’s website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project it was decided to use the drone's own structure, control electronics and software for positioning the drone. Apart from that, controlling a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing it should look downwards. Therefore the first idea was to disassemble it and mount the camera on a swivel tilted down 90 degrees, which would require some structural changes. Since all of the implementation is achieved in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some effort and trial and error, it was observed that capturing and transferring the images of the embedded drone camera is neither easy nor straightforward in MATLAB. Further effort showed that using this drone camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial and non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some data on the drone, including the images of the camera. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a view close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera is called the Ai-Ball and is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
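The command channel can be driven with a few lines of plain UDP code (a hedged Python sketch; the AT*FTRIM command and the increasing-sequence-number convention come from the AR.Drone SDK documentation, while the helper names are our own):<br />

```python
import socket

DRONE_IP = "192.168.1.1"   # remote host from the initialization above
AT_PORT = 5556             # control local port from the initialization above
seq = 0                    # AT commands carry an increasing sequence number

def send_at(sock, command, args=""):
    # Format is AT*<CMD>=<seq>[,<args>] terminated by a carriage return.
    global seq
    seq += 1
    msg = "AT*%s=%d%s\r" % (command, seq, ("," + args) if args else "")
    sock.sendto(msg.encode("ascii"), (DRONE_IP, AT_PORT))
    return msg

# With a connected drone:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_at(sock, "FTRIM")   # set the horizontal-plane reference
```
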
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
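A minimal sketch of the input side of such a wrapper (illustrative Python; the real wrapper is a MATLAB function and also parses the 500-byte Navdata packet, which is omitted here):<br />

```python
def clamp(v, lo=-1.0, hi=1.0):
    # Keep each command within the range the drone expects.
    return max(lo, min(hi, v))

def make_fly_command(tilt_x, tilt_y, vz, yaw_rate):
    """Build the 4-element command vector described above,
    with each entry clamped to [-1, 1]."""
    return [clamp(tilt_x), clamp(tilt_y), clamp(vz), clamp(yaw_rate)]
```
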
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to the desired location.<br />
<br />
The ‘GigeCam’ toolbox of MATLAB® is used to communicate with this camera. To obtain the indoor position of drone, 3 ultra-bright LEDs are placed on top of the Drone. A snapshot image of the field together with the agent is taken with a short exposure time. Then via the processing of this image for searching the pixels illuminated by the LEDs on the Drone, the coordinates on x and y-axes are obtained. Also, the yaw (ψ) orientation of the drone is obtained according to the relative positions of the pixels.<br />
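The pose extraction from the three LED pixel positions can be sketched as follows (an illustrative Python sketch that assumes one front LED and two rear LEDs; the actual LED layout and the MATLAB image processing code may differ):<br />

```python
import math

def drone_pose(front_led, rear_led_a, rear_led_b):
    """Estimate (x, y, psi) from three detected LED pixel coordinates."""
    # Midpoint of the two rear LEDs.
    rx = (rear_led_a[0] + rear_led_b[0]) / 2.0
    ry = (rear_led_a[1] + rear_led_b[1]) / 2.0
    # Drone centre: halfway between the rear midpoint and the front LED.
    x = (front_led[0] + rx) / 2.0
    y = (front_led[1] + ry) / 2.0
    # Yaw: direction from the rear midpoint towards the front LED.
    psi = math.atan2(front_led[1] - ry, front_led[0] - rx)
    return x, y, psi
```
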
<br />
The top-cam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of this search, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, one needs a Wi-Fi antenna. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is the field of view (FOV) of the camera. The definition of the FOV is shown above. The resolution of the AiBall is 480p with 4:3 aspect ratio. This yields 640x480 pixels image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to know the real-world size of the image frame and the corresponding real-world dimension per pixel. This information is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
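Given only the 60° diagonal FOV and the 4:3 aspect ratio, the horizontal and vertical FOVs follow from simple trigonometry (illustrative Python; this assumes an ideal rectilinear lens, which a cheap webcam only approximates):<br />

```python
import math

def fov_from_diagonal(diag_fov_deg, aspect_w=4, aspect_h=3):
    """Horizontal and vertical FOV (deg) from a diagonal FOV.

    For a rectilinear lens, tan(FOV/2) scales with the sensor dimension.
    """
    d = math.hypot(aspect_w, aspect_h)
    half_diag = math.tan(math.radians(diag_fov_deg) / 2.0)
    h = 2 * math.degrees(math.atan(half_diag * aspect_w / d))
    v = 2 * math.degrees(math.atan(half_diag * aspect_h / d))
    return h, v
```

For the Ai-Ball (60° diagonal, 4:3) this gives roughly 50° horizontal and 38° vertical FOV.<br />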
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities, which allow them to move and take images. Given the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project only the planar motion of the drone in (x, y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are trajectories such as straight lines, while aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. With identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values of the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero; this results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the motion equation of the drone, an integral action is not necessary. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, errors outside the dead-zone region are not offset from it; this prevents sending small commands in the oscillation region to the drone.<br />
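The three-region controller described above amounts to a few lines per state (a hypothetical Python sketch; the gains and dead-zone width shown are placeholders, not the tuned project values):<br />

```python
def deadzone_pd(error, d_error, kp=1.0, kd=0.1, dead=0.1):
    """PD command with a dead zone: zero output inside the comfort zone,
    a plain (non-offset) PD command on the raw error outside it."""
    if abs(error) < dead:
        return 0.0
    return kp * error + kd * d_error
```

Note that the error is not offset by the dead-zone width when it leaves the comfort zone, matching the choice described above to avoid small commands in the oscillation region.<br />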
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is of importance. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot with a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many different subsystems and components. Each component has its own task and functions. All of these tasks and skills are implemented using built-in MATLAB® functions and libraries. To apply communication, queuing and ordering according to the tasks and project aims, communication between all the components is required. Regarding tasks and functions, some are required to work simultaneously, some are consecutive and some others are independent. Additionally, the implementation should be compatible with the system architecture which, as can be seen there, is layered. To handle the simultaneous communication and the layered structure, Simulink is used for programming. The Simulink diagram created for the project is given in the following figure.<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px|Simulink Diagram of the overall system]]<br />
<br />
== Properties of the Simulink File ==<br />
* In Simulink diagram, blocks are categorized using ''Area'' utility, according to the functions/tasks that they have.<br />
* The categorization is shown via different colors and these divisions are consistent with the system architecture.<br />
* The interconnections between the functioning blocks are achieved via ''GoTo'' and ''From'' functions and transferring parameter names are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation. <br />
* The functions and blocks under the ''Visualizations'' area are not part of the main tasks of the project but are necessary to see the results; therefore this part is not part of the ''System Architecture''.<br />
* Since almost all of the functions and built-in commands in the algorithms are not directly available in Simulink, each MATLAB function is called from Simulink using the ''extrinsic'' command.<br />
* The blocks, functions and developed codes are well commented. The details of the algorithms can be examined while examining the source codes.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle that is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code from the code base of TechUnited was taken out. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>

https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45726 Implementation MSD16 2017-10-23T07:17:46Z <p>Tolcer: /* Top-Camera */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is and do not alter it. Preferably we would also use this software to process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transform the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since this project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is separating the line detection code from the combined detection code written by the previous generation, and creating the new function as an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls used can be red, orange or yellow: colors that lie in the upper-left corner of the CbCr-plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to suppress noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image a blob recognition algorithm returns blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidates, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
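The project's detection code is written in MATLAB; as a self-contained illustration of the confidence computation and size filtering described above, here is a Python sketch (function and field names are ours, not from the repository):

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence from the text: roundness (minor/major axis ratio)
    times agreement between blob radius and expected ball radius."""
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match

def filter_ball_candidates(blobs, r_ball, r_min, r_max):
    """Drop blobs that are too small or too big, score the rest.
    Each blob is a dict with 'center', 'minor_axis' and 'major_axis'."""
    candidates = []
    for b in blobs:
        r_blob = (b["minor_axis"] + b["major_axis"]) / 4.0  # mean radius
        if r_min <= r_blob <= r_max:
            conf = ball_confidence(b["minor_axis"], b["major_axis"],
                                   r_blob, r_ball)
            candidates.append((conf, b["center"]))
    return sorted(candidates, reverse=True)
```

A perfectly round blob of exactly the expected radius yields a confidence of 1.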
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of accepted blob sizes is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. As in the line detection case, essentially the algorithm developed by the previous generation is used. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, the algorithm was updated to also handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system is known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball out of pitch refereeing skill function. However, it still sometimes yields false positive and false negative results, so further improvement of the refereeing is necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take the images of the playing field and see in the image that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since the collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
 if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
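The same condition, transcribed as a Python sketch (the thresholds are those given above; the actual implementation is in MATLAB):

```python
def possible_collision(minor_axis, major_axis, min_obj_radius):
    """A blob that is much longer than it is wide, and larger than a
    single player, is flagged as two players standing against each other."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * min_obj_radius
            and major_axis >= 4 * min_obj_radius)
```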
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in diverse ways. In this project, the planar position (x, y, ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone attitude is stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore these two coordinates are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, and the altimeter's output data is accessible. The obtained drone altitude is fused with the planar position data. The information from the different position measurements is composed into the vector given below, and this vector is used as ‘droneState’.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperat2.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of each detected object or ball are calculated relative to the center of the image. This data is processed further under the following principles:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector and lies along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now, the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera, so the height information of the drone should be used. Using the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
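To make the conversion concrete, here is a Python sketch of the pixel-to-field mapping under the principles listed above (pinhole camera, zero roll/pitch; the FOV is taken as the horizontal angle, and all function and parameter names are illustrative, not from the project code):

```python
import math

def pixels_per_mm(height_mm, fov_deg, image_width_px):
    """The ground footprint of a downward-facing camera at height h is
    2*h*tan(FOV/2) wide (pinhole assumption)."""
    footprint_mm = 2.0 * height_mm * math.tan(math.radians(fov_deg) / 2.0)
    return image_width_px / footprint_mm

def pixel_to_field(px, py, image_size, height_mm, fov_deg, cam_pos_mm, yaw_rad):
    """Convert a pixel coordinate to field coordinates (mm):
    1) move the origin to the image centre, 2) scale pixels to mm,
    3) rotate by the drone yaw, 4) translate by the camera position."""
    w, h = image_size
    scale = 1.0 / pixels_per_mm(height_mm, fov_deg, w)
    dx = (px - w / 2.0) * scale        # image-centre coordinates
    dy = (h / 2.0 - py) * scale        # image y grows downwards, so flip
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    x = cam_pos_mm[0] + c * dx - s * dy
    y = cam_pos_mm[1] + s * dx + c * dy
    return x, y
```

For example, at 1 m height with a 90° FOV and a 640-pixel-wide image, the footprint is 2 m wide and one millimetre maps to 0.32 pixels.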
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as a task. The path planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for an agent's controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects like the ball, whether or not it has been updated by an agent camera. In the latter case, the particle filter gives an estimation of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is also possible to take the velocity vector of the object into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object, so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve it, we require a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied. For the ground robot, the reference value should be determined only in the moving direction of the turtle; hence, only the x-component (turtle moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
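The search above can be sketched in Python; the identified drone model with its controller is replaced here by straight-line motion at constant speed (a simplifying assumption for illustration, not the project's model):

```python
import math

def intercept_time(drone_pos, drone_speed, ball_pos, ball_vel,
                   dt=0.05, t_max=10.0):
    """Search over time-ahead t0: predict the ball at t0 (constant
    velocity), estimate the drone's time-to-target TT (here: straight
    line at constant speed), and return the first t0 with TT <= t0,
    i.e. the earliest reachable intercept point."""
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0],
                          target[1] - drone_pos[1])
        tt = dist / drone_speed
        if tt <= t0:
            return t0, target
        t0 += dt
    return None  # the ball cannot be intercepted within t_max
```

A stationary ball 4 m away from a drone flying at 2 m/s is intercepted after about 2 s; a ball receding faster than the drone is reported as unreachable.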
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning calculated from the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
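Since the block was not implemented in the project, the following is only a sketch of the rule described above: when two drones are closer than a safe distance, each receives a unit velocity command perpendicular to its own velocity and pointing away from the other drone (all names and the unit magnitude are illustrative choices):

```python
import math

def avoidance_commands(p1, v1, p2, v2, safe_dist):
    """Return perpendicular repulsion commands for both drones when
    they are within safe_dist of each other, otherwise None (normal
    path planning stays active)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) >= safe_dist:
        return None
    def repel(v, away):
        # two perpendiculars to v; pick the one pointing away from the peer
        n1 = (-v[1], v[0])
        n2 = (v[1], -v[0])
        n = n1 if n1[0] * away[0] + n1[1] * away[1] > 0 else n2
        norm = math.hypot(n[0], n[1]) or 1.0
        return (n[0] / norm, n[1] / norm)
    return repel(v1, (-dx, -dy)), repel(v2, (dx, dy))
```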
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
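The actual World Model class lives in the MATLAB repository; the sketch below only mirrors the described design in Python (explicit 'set' functions, players as a class of their own). The assumption that WorldModel(n) allocates n players per team for two teams is ours:

```python
class Player:
    """Players are a class of their own, as described above."""
    def __init__(self):
        self.pos = None  # (x, y)

class WorldModel:
    """Ball, drone and turtle are plain properties; all writes go
    through explicit 'set' functions so that other processes cannot
    accidentally overwrite World Model data."""
    def __init__(self, n_players):
        self._ball = None
        self._drone = None
        self._turtle = None
        # assumption: n players per team, two teams
        self.players = [Player() for _ in range(2 * n_players)]

    def set_ball(self, pos):
        self._ball = pos

    def set_drone(self, state):
        self._drone = state  # (x, y, psi, z)

    def set_player(self, idx, pos):
        self.players[idx].pos = pos

    @property
    def ball(self):
        return self._ball
```

Usage: W = WorldModel(2); W.set_ball((1.0, 2.0)); reading W.ball works, while writing requires the set function.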
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
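The currently implemented reinitialisation rule can be sketched as follows (Python; the 0.5 m threshold and the two-consecutive-outliers rule come from the text, everything else is illustrative):

```python
import math

def update_ball_estimate(estimate, outliers, z_new,
                         threshold=0.5, n_outliers=2):
    """Sketch of the rule described above: if n_outliers consecutive
    measurements are further than `threshold` metres from the strong
    estimate, the last measurement becomes the new strong estimate.
    `outliers` is the running list of recent outliers (mutated in place)."""
    dist = math.hypot(z_new[0] - estimate[0], z_new[1] - estimate[1])
    if dist > threshold:
        outliers.append(z_new)
        if len(outliers) >= n_outliers:
            outliers.clear()
            return z_new  # reinitialise the strong filter here
        return estimate
    outliers.clear()  # an isolated outlier was a false positive
    return estimate
```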
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match it to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this is generally not a problem. However, with a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
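A Python sketch of such a greedy nearest-neighbour match (the real ‘Match’ function is nested in the MATLAB particle filter; this only mirrors its described behaviour, including the fall-through to the second-nearest neighbour):

```python
def match_measurements(measurements, player_positions):
    """Assign each measurement to the closest not-yet-taken player,
    so a second measurement matched to the same player falls through
    to its second-nearest neighbour.
    Returns {measurement index: player index}."""
    taken = set()
    assignment = {}
    for mi, m in enumerate(measurements):
        order = sorted(
            (i for i in range(len(player_positions)) if i not in taken),
            key=lambda i: (m[0] - player_positions[i][0]) ** 2
                        + (m[1] - player_positions[i][1]) ** 2)
        if order:
            assignment[mi] = order[0]
            taken.add(order[0])
    return assignment
```

Note that this greedy assignment is order-dependent, which is exactly the sub-optimality discussed above.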
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control system for the drone can be robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visualization of the original data measured from the top camera. Based on fig.2, the motion data clearly indicates what the motion of the drone is like in one degree of freedom. To make it continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces a reasonable guess for the empty data points. <br />
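The gap filling can be sketched as simple linear interpolation over the missing samples (Python; the actual preprocessing is done in MATLAB, and the exact interpolation scheme used there is not specified):

```python
def interpolate_gaps(samples):
    """Linearly interpolate None entries in a list of scalar samples,
    as done for the top-camera frames with no drone detection.
    Leading/trailing gaps are filled with the nearest valid value."""
    out = list(samples)
    known = [i for i, v in enumerate(out) if v is not None]
    if not known:
        return out
    for i in range(len(out)):
        if out[i] is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:
            out[i] = out[right]
        elif right is None:
            out[i] = out[left]
        else:
            w = (i - left) / (right - left)
            out[i] = out[left] * (1 - w) + out[right] * w
    return out
```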
====Coordinate system introduction====<br />
As the drone is flying object with four degree of freedom in the field, there exist two coordinate systems. One is the coordinate system in body frame, the other one is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in body frame coordinate system via control signals (a, b, c, d). The velocities measured are displayed also in the body frame coordinate system. The positions measured by the top camera are calculated in global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter. The model identified is the response to the input commands (a, b, c and d) in the body frame. The filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
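The frame change is a plain planar rotation by the yaw angle ψ; a Python sketch:

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame using the
    drone yaw psi (standard 2D rotation matrix)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)

def global_to_body(vx_glob, vy_glob, psi):
    """The inverse transformation is a rotation by -psi."""
    return body_to_global(vx_glob, vy_glob, -psi)
```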
====Model identification from input to position====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help Matlab make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in Matlab. The result represents the extent to which the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there are 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimation is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may cause the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated. The data selected for identification was measured in a situation where the battery was full, the orientation was fixed, and the drone started from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br> <br />
The model is then:<br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To that end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the recent state of the drone, together with the field of view and resolution of the camera (which are defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, in order to reduce the false positives, errors and processing time of those algorithms. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill is achieved using the ''imfindcircles'' built-in command of the MATLAB® image processing toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height information is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixel units is fed into the ball detection skill.<br />
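A sketch of that calculation (Python; pinhole model with the FOV taken as the horizontal angle, which is an assumption). Note that ''imfindcircles'' actually takes a radius range, which could be derived from this estimate, e.g. ±20%:

```python
import math

def estimate_ball_radius_px(height_mm, fov_deg, image_width_px,
                            ball_radius_mm):
    """Expected ball radius in pixels: the camera footprint on the
    ground is 2*h*tan(FOV/2) wide, so one millimetre maps to
    image_width_px / footprint pixels."""
    footprint_mm = 2.0 * height_mm * math.tan(math.radians(fov_deg) / 2.0)
    return ball_radius_mm * image_width_px / footprint_mm
```

For a 110 mm ball radius seen from 1 m with a 90° FOV and 640-pixel-wide images, the expected radius is about 35 pixels.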
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is defined here. The resulting estimated object radius in pixels is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. This estimator continuously calculates the relative position of the outer lines corresponding to the state of the drone; this position information is encoded using the Hough transform convention. The line estimator is needed to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, and otherwise disabled, since an always-running Line Detection Skill would produce many false positive line outputs. This enable/disable information is also encoded in the output matrix. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is given [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. In addition, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone as given on the manufacturer's website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, as well as its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, it was decided to use the drone's own structure, control electronics and software for positioning the drone. Apart from that, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. Therefore the first idea was to disassemble it and connect the camera to a swivel so that it could tilt down 90 degrees, at the cost of some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images have to be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera into MATLAB is not straightforward. Further effort showed that using this drone camera to capture images is either not compatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly from MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best frame rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a horizontal view of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the results obtained from them are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
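For reference, the horizontal and vertical FOV implied by a nominal diagonal FOV can be computed with a small helper (a sketch assuming an ideal rectilinear lens; the measured value near 70° suggests the effective horizontal FOV is narrower than this nominal figure):

```python
import math

def fov_components(diag_fov_deg, aspect_w, aspect_h):
    """Split a diagonal field of view into horizontal and vertical
    components for a rectilinear lens with the given aspect ratio."""
    diag = math.hypot(aspect_w, aspect_h)
    t = math.tan(math.radians(diag_fov_deg) / 2.0)
    h_fov = 2.0 * math.degrees(math.atan(t * aspect_w / diag))
    v_fov = 2.0 * math.degrees(math.atan(t * aspect_h / diag))
    return h_fov, v_fov
```

For the 92° diagonal and 16:9 aspect ratio above, this gives a nominal horizontal FOV of about 84°, so the measured 70° indicates the usable view is appreciably smaller than the specification.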
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, that camera is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties of the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
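A rough sketch of this start-up sequence is given below. The command strings follow the AR.Drone AT-command convention (a sequence number followed by arguments, terminated by a carriage return); the exact packet contents and ordering should be taken from the SDK figure above, and the wake-up bytes sent to the navdata port are an assumption based on common AR.Drone client implementations.

```python
import socket

DRONE_IP = "192.168.1.1"        # remote host from the initialization list
AT_PORT, NAV_PORT = 5556, 5554  # control and navdata ports

def at_string(cmd, seq, args=""):
    """Format a single AT command string (carriage-return terminated)."""
    return "AT*{}={}{}\r".format(cmd, seq, args)

def initiate_navdata():
    """Sketch of the navdata initiation followed by the FTRIM command."""
    at = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.settimeout(0.001)                                    # 1 ms, as initialized
    seq = 1
    nav.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAV_PORT))    # wake the navdata stream
    at.sendto(at_string("CONFIG", seq, ',"general:navdata_demo","TRUE"').encode(),
              (DRONE_IP, AT_PORT)); seq += 1                 # request reduced navdata
    at.sendto(at_string("FTRIM", seq).encode(), (DRONE_IP, AT_PORT))  # flat-trim reference
```

The FTRIM command should only be sent while the drone is on a flat, horizontal surface, since it defines the internal controller's horizontal reference.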
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to the desired location.<br />
<br />
The ‘GigeCam’ toolbox of MATLAB® is used to communicate with this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of the drone. A snapshot of the field together with the agent is taken with a short exposure time. This image is then searched for the pixels illuminated by the LEDs on the drone, yielding the coordinates on the x and y axes. The yaw (ψ) orientation of the drone is also obtained from the relative positions of these pixels.<br />
<br />
The top-cam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. the image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
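The LED search itself can be sketched as follows (hypothetical code: the real marker geometry and thresholds may differ; here we assume an asymmetric LED layout so that the LED farthest from the centroid fixes the heading):

```python
import numpy as np

def drone_pose_from_leds(gray, threshold=250):
    """Estimate (x, y, yaw) of the drone from a short-exposure
    grayscale top-camera frame in which only the LEDs are bright.

    Position is the centroid of the saturated pixels; yaw is the
    direction from the centroid to the farthest bright pixel.
    Returns None when no LED pixels are found."""
    ys, xs = np.nonzero(gray >= threshold)
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    k = int(np.argmax(d2))
    yaw = float(np.arctan2(ys[k] - cy, xs[k] - cx))
    return float(cx), float(cy), yaw
```

The short exposure time is what makes this simple thresholding viable: only the LEDs saturate the sensor, so no color segmentation is needed.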
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of this search, a Wi-Fi webcam was finally chosen; its details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to determine the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed as a football-playing robot. Details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any extension, as part of the extensive existing code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and take images. Given the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project the planar motion of the drone in (x,y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are straight-line-like trajectories; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x,y,θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative with PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
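The resulting control law per axis can be sketched as follows (a minimal illustration; the gains and dead-zone width are placeholders, not the tuned project values):

```python
def dead_zone_pd(err, derr, kp, kd, dead_zone):
    """High-level controller output for one axis: zero inside the
    comfort (dead) zone, plain PD outside it. The error is
    deliberately NOT offset by the dead-zone width, so small commands
    in the LLC's oscillation region are never sent."""
    if abs(err) < dead_zone:
        return 0.0
    return kp * err + kd * derr
```

Because the output jumps from zero straight to the full PD value at the dead-zone boundary, the drone either holds position or receives a command large enough to clear the LLC's unstable region.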
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system using a rotation matrix based on Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles; within this method, the order of rotation around the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle.<br />
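Under this small roll/pitch assumption the transformation is just a planar rotation by the yaw angle, e.g.:

```python
import math

def global_to_drone(vx_g, vy_g, psi):
    """Rotate a command vector from the global frame into the drone
    frame. With roll and pitch assumed zero, the full RPY rotation
    matrix reduces to this planar rotation by the yaw angle psi."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g,
            -s * vx_g + c * vy_g)
```

For instance, with the drone yawed 90° relative to the field, a commanded motion along the global x-axis becomes a motion along the drone's negative y-axis.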
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many different subsystems and components, each with its own tasks and functions. All of these tasks and skills are implemented using built-in MATLAB® functions and libraries. To handle communication, queuing and ordering according to the tasks and project aims, communication between all the components is required: some tasks and functions have to run simultaneously, some are consecutive, and others are independent. Additionally, the implementation should be compatible with the system architecture, which, as can be seen there, is layered. To handle the simultaneous communication and the layered structure, Simulink is used for programming. The Simulink diagram created for the project is given in the following figure.<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px|Simulink Diagram of the overall system]]<br />
<br />
== Properties of the Simulink File ==<br />
* In the Simulink diagram, blocks are categorized using the ''Area'' utility, according to their functions/tasks.<br />
* The categorization is shown via different colors, and these divisions are consistent with the system architecture.<br />
* The interconnections between the functional blocks are made via ''GoTo'' and ''From'' blocks, and the names of the transferred parameters are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation. <br />
* The functions and blocks under the ''Visualizations'' area are not part of the main tasks of the project but are necessary to see the results; this part is therefore not part of the ''System Architecture''.<br />
* Since almost all of the functions and built-in commands in the algorithms are not directly available to Simulink, each MATLAB function is declared ''extrinsic'' and called from Simulink.<br />
* The blocks, functions and developed code are well commented; the details of the algorithms can be examined in the source code.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored in the memory of the Turtle and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap; details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information had to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code base of TechUnited. It consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in the MATLAB environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45725Implementation MSD162017-10-23T07:13:31Z<p>Tolcer: /* Integration */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the BOOP detection, it is necessary to know where the ball is and where the outer field lines are; to detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as-is. Preferably we would also use this software to process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
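For reference, the per-pixel RGB-to-YCbCr conversion (the BT.601 studio-range mapping, which is what MATLAB's rgb2ycbcr implements for 8-bit images) looks like this:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr (ITU-R BT.601,
    studio range: Y in [16, 235], Cb/Cr in [16, 240])."""
    y  =  16.0 + ( 65.738 * r + 129.057 * g +  25.064 * b) / 256.0
    cb = 128.0 + (-37.945 * r -  74.494 * g + 112.439 * b) / 256.0
    cr = 128.0 + (112.439 * r -  94.154 * g -  18.285 * b) / 256.0
    return y, cb, cr
```

A fully yellow pixel, for instance, maps to a very low Cb value, which is exactly the kind of separation the color filtering in the detection skills relies on.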
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, making it an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color: the balls that can be used are red, orange or yellow, colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest a value of 0. Next, to filter some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball: blobs that are too big or too small are removed from the list, and for each remaining candidate a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
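The final confidence step can be sketched as below; taking Rblob as the mean of the semi-axes is our assumption about the original code:

```python
def ball_confidence(minor_axis, major_axis, r_expected):
    """Confidence of a blob being the ball, per the formula above:
    roundness (minor/major axis ratio) times the agreement between
    the blob radius and the expected ball radius in pixels."""
    r_blob = (minor_axis + major_axis) / 4.0   # mean semi-axis, assumed Rblob
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_expected) / max(r_blob, r_expected)
    return roundness * size_match
```

A perfectly round blob whose radius matches the estimate from the ball-size estimator scores 1.0; elongation or a size mismatch each scale the confidence down multiplicatively.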
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of color filtering on the CbCr plane, it is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of accepted blob sizes is larger for the object detection than for the ball detection, because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused; a detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an improvement was added to handle the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it still sometimes yields false positive and false negative results, so further improvement of the refereeing is necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection uses the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if (((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
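This condition can be written as a small predicate:

```python
def possible_collision(minor_axis, major_axis, minimal_object_radius):
    """Flag a blob as a possible collision: a blob clearly longer
    than it is wide, and large enough along both axes to contain
    two players standing against each other."""
    return ((major_axis / minor_axis > 1.5)
            and (minor_axis >= 2 * minimal_object_radius)
            and (major_axis >= 4 * minimal_object_radius))
```

The size thresholds keep single elongated artifacts (e.g. shadows or partial blobs) from triggering a collision, since a merged two-player blob must be at least two player widths wide and four player radii long.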
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are stabilized well enough that roll (φ) and pitch (θ) are zero. Therefore, these two components are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. x, y and yaw (ψ). However, to handle the refereeing tasks and image processing, the drone altitude must also be known. The drone has its own altimeter whose output data is accessible, and the obtained altitude is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as ‘droneState’.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
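Composing ‘droneState’ then amounts to appending the altimeter altitude to the top-camera planar pose; a minimal sketch (Python here, MATLAB in the project; the function name is illustrative):<br />

```python
def compose_drone_state(topcam_pose, altimeter_z):
    """Fuse the top-camera planar pose (x, y, psi) with the drone's
    altimeter reading into the 'droneState' vector [x, y, psi, z].

    topcam_pose : (x, y, psi) in field coordinates
    altimeter_z : altitude from the drone's own altimeter
    """
    x, y, psi = topcam_pose
    return [x, y, psi, altimeter_z]
```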
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference coordinate axes. The coordinate system of the image in MATLAB is given below. Note that, here, the unit of the position data is pixels. <br />
[[File:FrameRef_seperat2.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Next, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further under the following assumptions:<br />
* The focal center of the camera is assumed to coincide with the center of the image.<br />
* The camera is always parallel to the ground plane; tilting of the drone about its roll (φ) and pitch (θ) axes is neglected.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the drone's x-axis. Taking the principles above into account and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object must be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera, so the height information of the drone must be used. From the height of the drone and the FOV of the camera, the pixel-to-millimeter ratio is calculated. More detailed information about the FOV is given in the next sections.<br />
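The ratio can be sketched as below (Python illustration; a pinhole camera with a horizontal FOV matched to the image width is assumed):<br />

```python
import math

def mm_per_pixel(height_mm, fov_deg, image_width_px):
    """Ground distance in millimetres covered by one image pixel, for
    a downward-looking camera at the given height.

    The visible ground width is 2 * h * tan(FOV/2); dividing by the
    image width in pixels gives millimetres per pixel.
    """
    ground_width = 2.0 * height_mm * math.tan(math.radians(fov_deg) / 2.0)
    return ground_width / image_width_px
```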
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, it may assign 'detect ball' to agent A (the drone) and 'locate player' to agent B. The path planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of objects like the ball, whether or not it has been updated by the agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as the reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the world model provides the path planner with the latest position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can also be taken into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). If instead the estimated ball position some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is choosing the optimal time ahead t0 to use for the desired reference. To solve this, we need a model of the drone motion including its controller, to calculate the time it takes to reach a certain point given the drone's initial conditions. Then, in the search algorithm, for each time step ahead of the ball, the drone's time-to-target (TT) is calculated (see Fig.3), with the target position computed from the time ahead. The reference position is the position that satisfies t0 = TT. Hence, the reference becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should be determined only in the moving direction of the turtle, so only the x-component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
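The search for the time ahead t0 can be sketched as follows (Python illustration; the constant-speed straight-line drone model used for TT here is a stand-in for the real drone-plus-controller model):<br />

```python
import math

def find_time_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    dt=0.05, t_max=5.0):
    """Search for the look-ahead time t0 such that the drone's
    time-to-target TT for the predicted ball position equals t0.

    Assumed drone model: straight flight at constant speed, so
    TT = distance / speed. t0 is stepped forward until TT <= t0,
    and the matching reference point [x(t+t0), y(t+t0)] is returned.
    """
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0], target[1] - drone_pos[1])
        if dist / drone_speed <= t0:   # drone can reach the point in time
            return t0, target
        t0 += dt
    # fall back to the current ball position if no intercept is found
    return None, ball_pos
```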
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has higher priority than the optimal path planning computed from the objectives of the drones (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is achieved by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone. This command is sent to the LLC and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an interesting area for others to continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
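The repelling command described above can be sketched as (Python illustration; the unit gain and the sign of the rotation are assumptions):<br />

```python
import math

def avoidance_command(velocity, gain=1.0):
    """Velocity command perpendicular to the drone's current velocity
    vector, used to repel two drones on a collision course.
    Rotating (vx, vy) by +90 degrees gives (-vy, vx), here normalised
    and scaled by the gain."""
    vx, vy = velocity
    norm = math.hypot(vx, vy)
    if norm == 0.0:
        return (0.0, 0.0)
    return (-vy / norm * gain, vx / norm * gain)
```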
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
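The set-function pattern above can be sketched as follows (Python illustration; the actual class is MATLAB and its exact method names follow Table 1, which are not reproduced here):<br />

```python
class WorldModel:
    """Minimal sketch of the storage role of the World Model: object
    positions are readable by anyone, but are only changed through
    explicit set functions, so processes cannot accidentally
    overwrite WM data."""

    def __init__(self, n_players):
        # ball, drone and turtle are plain properties; the players
        # are kept in their own container, sized per team setup
        self.ball = None
        self.drone = None
        self.turtle = None
        self._players = [None] * n_players

    def set_ball(self, pos):
        self.ball = pos

    def set_player(self, idx, pos):
        self._players[idx] = pos

    def get_player(self, idx):
        return self._players[idx]
```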
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; its hypothesis is simply updated by new measurements. If two consecutive measurements are further than 0.5 meters away from the current estimate, the last one acts as the new initial value for the strong filter. <br><br><br />
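The reset rule (two consecutive measurements more than 0.5 m from the estimate) can be sketched as (Python illustration; the blending of inlier measurements into the strong estimate is omitted here):<br />

```python
import math

class BallHypothesisTracker:
    """Track the 'strong' (heavily filtered) ball estimate and reset
    it when two consecutive measurements land more than `threshold`
    metres away from it. The reset takes the last measurement as the
    new initial position of the strong filter."""

    def __init__(self, initial, threshold=0.5):
        self.estimate = initial      # strong-filter position (x, y)
        self.threshold = threshold
        self.outliers = 0

    def update(self, z):
        dist = math.hypot(z[0] - self.estimate[0], z[1] - self.estimate[1])
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:   # two outliers: change in direction
                self.estimate = z
                self.outliers = 0
        else:
            self.outliers = 0
            # a full implementation would blend z into the estimate here
        return self.estimate
```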
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are both used by the same particle filter, as it does not matter which source a measurement comes from. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
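A minimal version of this ‘Match’ step (Python illustration; the real implementation is part of the MATLAB particle filter):<br />

```python
def match_measurements(measurements, last_positions):
    """Match each measured position to the nearest last-known player
    position; if that player is already taken, fall back to the
    nearest free player (the 'second nearest neighbour' described
    above). Returns a list of player indices, one per measurement."""
    assigned = []
    for z in measurements:
        # player indices sorted by squared distance to the measurement
        order = sorted(range(len(last_positions)),
                       key=lambda i: (z[0] - last_positions[i][0]) ** 2
                                   + (z[1] - last_positions[i][1]) ** 2)
        pick = next(i for i in order if i not in assigned)
        assigned.append(pick)
    return assigned
```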
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, three LEDs on the drone can be detected by the camera above the field; based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the top camera cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the subsequent closed-loop drone control can be robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-back tilt: a floating-point value in the range [-1, 1]. Command (b) is the left-right tilt: a floating-point value in the range [-1, 1]. d is the drone angular speed in the range [-1, 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, Psi) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visualization of the original data measured from the top camera. Based on fig 2, the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces a reasonable guess for the empty data points. <br />
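The gap-filling step can be sketched as follows (Python illustration of linear interpolation over missing samples; in the project this was done on the recorded MATLAB data):<br />

```python
def interpolate_gaps(samples):
    """Linearly interpolate empty (None) entries in a list of position
    samples from the top camera. Leading and trailing gaps, which have
    no data on one side, are left untouched."""
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            if i > 0 and j < len(out):          # gap bounded on both sides
                a, b = out[i - 1], out[j]
                for k in range(i, j):
                    frac = (k - (i - 1)) / (j - (i - 1))
                    out[k] = a + (b - a) * frac
            i = j
        else:
            i += 1
    return out
```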
==== Coordinate system introduction ====<br />
Since the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: one in the body frame and one in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame in order to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
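The body/global transformation around the Kalman filter can be sketched as (Python illustration; a planar rotation by the measured yaw ψ):<br />

```python
import math

def body_to_global(v_body, psi):
    """Rotate a planar vector from the drone body frame to the global
    field frame using the measured yaw angle psi."""
    c, s = math.cos(psi), math.sin(psi)
    vx, vy = v_body
    return (c * vx - s * vy, s * vx + c * vy)

def global_to_body(v_global, psi):
    """Inverse rotation: global frame back to the body frame."""
    return body_to_global(v_global, -psi)
```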
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there are 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that was investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, for ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The defined estimator blocks are the ball size, object size and line estimators. Using the current state of the drone together with the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms in order to reduce false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The ball detection skill uses the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' must be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height is obtained from the drone position data; the others are defined in the initialization function. The estimated ball radius in pixel units is fed into the ball detection skill.<br />
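The radius estimate can be sketched as below (Python illustration; the FOV here is taken as the horizontal FOV matching the image width):<br />

```python
import math

def expected_ball_radius_px(ball_radius_mm, height_mm,
                            fov_deg, image_width_px):
    """Expected ball radius in pixels, from the drone height, the
    camera's horizontal FOV and the real ball size. This is the value
    fed to the circle detection as the expected radius."""
    ground_width_mm = 2.0 * height_mm * math.tan(math.radians(fov_deg) / 2.0)
    px_per_mm = image_width_px / ground_width_mm
    return ball_radius_mm * px_per_mm
```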
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is used here. The estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The line estimator block gives the expected outer lines of the field. It always calculates the relative position of the outer lines corresponding to the current state of the drone; this position information is encoded using the Hough transform parameterization. The line estimator is used for enabling and disabling line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the line detection skill should be enabled, otherwise it should be disabled. This enable/disable information is also encoded in the output matrix, because an always-running line detection skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the line detection skill; since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''refereeing task''.<br />
<br />
More detailed information on the algorithm behind this estimator is given [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, it was decided to use the drone's own structure, control electronics and software for positioning the drone; moreover, low-level control of a drone is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. The first idea was therefore to disassemble the camera and mount it on a swivel, tilting it down 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after considerable trial and error, it was observed that capturing and transferring images from the drone's embedded camera to MATLAB is neither easy nor straightforward; further effort showed that using this camera is either incompatible with MATLAB or causes a lot of delay. The idea of a swiveled drone camera was therefore abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which costs processing time: the best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a horizontal FOV of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have simple communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be treated as a block that expects a UDP packet containing a string as input and returns an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
The ‘GigeCam’ toolbox of MATLAB® is used to access this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of it. A snapshot of the field, including the agent, is taken with a short exposure time. This image is then processed to find the pixels illuminated by the LEDs, from which the x- and y-coordinates of the drone are obtained. The yaw (ψ) orientation of the drone follows from the relative positions of these pixels.<br />
<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. the image processing) may be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, the result is acceptable.<br />
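The pose estimate from the LED pixels can be sketched as follows. This is a hypothetical Python illustration, assuming the three LEDs form an isosceles triangle whose apex marks the front of the drone (the actual LED layout and MATLAB code may differ); the input is the list of LED pixel centroids found by thresholding the short-exposure snapshot.<br />

```python
import math

def drone_pose_from_leds(leds):
    # leds: [(x, y), (x, y), (x, y)] pixel centroids of the bright spots.
    cx = sum(p[0] for p in leds) / 3.0
    cy = sum(p[1] for p in leds) / 3.0
    # Apex LED = farthest from the centroid (valid for an isosceles layout).
    apex = max(leds, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    base = [p for p in leds if p is not apex]
    # Yaw points from the midpoint of the base pair towards the apex.
    mx = (base[0][0] + base[1][0]) / 2.0
    my = (base[0][1] + base[1][1]) / 2.0
    yaw = math.atan2(apex[1] - my, apex[0] - mx)
    return cx, cy, yaw
```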
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After evaluating several options, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that streams images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the camera's batteries are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
As noted above, one of the most important properties of a camera is its field of view (FOV). The Ai-Ball has a 480p resolution with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to determine the real-world size of the image frame and the real-world dimension per pixel, and it is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any expansion, as part of the extensive code base could be reused to fulfil the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block is to track the desired drone states (xd, yd, θd), i.e. the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as the reference for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple paths such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig.2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the drone's onboard speed controller, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Finally, through a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent to the drone as fly commands.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Motion Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig.3). The input-output characteristic of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative using PD coefficients (Fig.4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands to the drone in the oscillation region.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system using a rotation matrix based on the Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
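The per-axis behaviour described above (zero output inside the dead zone, plain PD on the raw error outside it, without offsetting the error by the dead-zone width) can be sketched as follows. This is an illustrative Python transcription; the project implements the controller in Simulink, and the gain names are placeholders.<br />

```python
def hlc_output(error, d_error, dead_zone, kp, kd):
    # Inside the dead zone the drone is in its comfort zone: no command.
    if abs(error) < dead_zone:
        return 0.0
    # Outside it: plain PD on the raw error (no dead-zone offsetting,
    # to avoid small commands in the LLC's oscillation region).
    return kp * error + kd * d_error
```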
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about sequentially displaced axes of the reference frame. These angles are generally referred to as Euler angles; within this method, the order of rotation about the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is kept constant and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
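With roll and pitch assumed zero, the global-to-drone transformation reduces to a planar rotation by the yaw angle, which can be sketched as below (illustrative Python; the sign convention of the yaw angle is an assumption and must match the drone's).<br />

```python
import math

def global_to_drone(vx_g, vy_g, yaw):
    # With roll and pitch near zero, the full RPY rotation reduces to a
    # planar rotation of the (x, y) command by the yaw angle.
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * vx_g + s * vy_g, -s * vx_g + c * vy_g)
```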
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. On the left, a copy with a protective cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many different subsystems and components, each with its own tasks and functions. All of these tasks and skills are implemented using built-in Matlab® functions and libraries. To realize the communication, queuing and ordering required by the tasks and project aims, communication between all components is needed. Some tasks must run simultaneously, some are consecutive and others are independent. Additionally, the implementation of the system should be compatible with the layered system architecture. To handle the simultaneous communication and the layered structure, Simulink is used for programming. The Simulink diagram created for the project is given in the following figure.<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px|Simulink Diagram of the overall system]]<br />
<br />
== Properties of the Simulink File ==<br />
* In the Simulink diagram, blocks are categorized with the ''Area'' utility according to their functions/tasks.<br />
* The categorization is shown via different colors, and these divisions are consistent with the system architecture.<br />
* The interconnections between the functional blocks are made via ''GoTo'' and ''From'' blocks, and the transferred parameter names are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation. <br />
* The functions and blocks under the ''Visualizations'' area are not part of the main tasks of the project but are necessary to see the results; therefore this part is not part of the ''System Architecture''.<br />
* Since most of the functions and built-in commands in the algorithms are not directly available in Simulink, each Matlab function is called from Simulink using the ''extrinsic'' command.<br />
* The blocks, functions and developed code are well commented; the details of the algorithms can be studied in the source code.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap; the details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was used: as stated earlier, information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code base of TechUnited. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div><br />
<br />
Implementation MSD16, contribution by Tolcer (2017-10-22): /* Locating of the Objects : Ball & Player */
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (B.O.O.P.)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras, detecting balls, lines and players requires image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as-is and do not alter it. Ideally, we could use this software to also process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm was updated and reused in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code written by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow: colors in the upper-left corner of the CbCr plane. A binary image is created in which pixels that fall into this corner get a value of 1 and the rest a value of 0. Next, to filter noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns the blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list it is determined whether each blob could be a ball: blobs that are too big or too small are removed. For the remaining candidates, a confidence is calculated based on blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
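The confidence formula above can be transcribed directly. The function below is an illustrative Python version (the project implements this in MATLAB); both factors lie in [0, 1], so a perfectly round blob of exactly the expected radius scores 1.<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    # Roundness (1 for a circle) times size agreement with the expected
    # ball radius (1 for a perfect match).
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```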
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering on color in the CbCr plane, it filters on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of accepted blob sizes is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
After the ball and the boundary lines are detected, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted by the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is still known (at least predicted), and based on this coordinate information the in/out decision can be improved. This addition is part of the ball-out-of-pitch refereeing skill function. However, it sometimes still yields false positives and false negatives, so a further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we keep track of the position and velocity of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when an image of the playing field shows no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since the collision detection in the world model was not implemented, we only discuss the image-based collision detection. It uses the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and compared to the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
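The same condition can be written as a small predicate. This is an illustrative Python transcription of the MATLAB check above: an elongated blob that is wide enough for one player and long enough for two is flagged.<br />

```python
def possible_collision(major_axis, minor_axis, r_min):
    # Elongated (not round), at least one player wide, and at least two
    # players long -> plausibly two touching players, not a single one.
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * r_min
            and major_axis >= 4 * r_min)
```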
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in various ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained from an ultra-bright LED strip detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are stabilized well enough that roll (φ) and pitch (θ) are zero. Therefore these two angles are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame: x, y and yaw (ψ). However, for the refereeing tasks and the image processing, the drone altitude must also be known. The drone has its own altimeter and its output data is accessible; the obtained altitude is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as ‘droneState’.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below; note that the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperat2.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further under the following assumptions:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the image center) with respect to the origin of the drone is known: it is a drone-fixed position vector along the x-axis of the drone. Taking into account the assumptions above and adding the known (measured) drone position to the camera's position vector (including the drone's yaw (ψ) orientation), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be aligned as shown in the figure. <br />
<br />
Finally, the pixel coordinates of the detected object are converted into real-world units (from pixels to millimeters). This conversion factor changes with the height of the camera, so the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
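The pixel-to-field mapping under the listed assumptions can be sketched as follows. This is an illustrative Python version (the project does this in Simulink); the camera offset, FOV and resolution are parameters, and the image x-axis is taken to be aligned with the drone x-axis as assumed above.<br />

```python
import math

def pixel_to_field(u, v, drone_state, cam_offset, h_fov_deg, width_px):
    # u, v: pixel offsets of the detection from the image centre.
    # drone_state = (x, y, yaw, z) in field coordinates / mm / rad.
    x, y, yaw, z = drone_state
    # Millimetres per pixel follows from the altitude and horizontal FOV.
    mm_per_px = 2.0 * z * math.tan(math.radians(h_fov_deg) / 2.0) / width_px
    # Offsets in the drone frame; cam_offset is the camera's known
    # distance from the drone origin along the drone x-axis.
    dx = u * mm_per_px + cam_offset
    dy = v * mm_per_px
    # Rotate into the field frame and translate by the drone position.
    c, s = math.cos(yaw), math.sin(yaw)
    return (x + c * dx - s * dy, y + s * dx + c * dy)
```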
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents, sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill is to be performed by each agent; for instance, it sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of objects such as the ball whether or not they have been updated from the agent camera; in the latter case, the particle filter estimates the ball position and velocity from the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path-planning block: avoiding collisions between drones in the case of multiple drones, and generating an optimal path as the reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field, after which the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other; however, the velocity vector of the object can also be taken into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line); if instead the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to set as the desired reference. To solve it, we require a model of the drone motion with the controller, in order to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply extrapolated over the time ahead, and the reference position is the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move in only one direction, the same strategy can be applied: the reference value should then be determined only in the moving direction of the Turtle, so only the x-component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
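The search for the optimal time ahead described above can be sketched as follows. This is an illustrative Python sketch, not the project's MATLAB implementation; the straight-line constant-speed time-to-target model and the value of `v_max` are assumptions standing in for the real drone-plus-controller model.

```python
import math

def time_to_target(drone_pos, target_pos, v_max=1.5):
    """Simplified time-to-target (TT) model: straight-line travel at a
    constant effective speed v_max (m/s). The real model would use the
    closed-loop drone dynamics, as described in the text."""
    dx = target_pos[0] - drone_pos[0]
    dy = target_pos[1] - drone_pos[1]
    return math.hypot(dx, dy) / v_max

def reference_position(drone_pos, ball_pos, ball_vel, dt=0.05, t_max=5.0):
    """Scan look-ahead times t0 and return the first extrapolated ball
    position for which TT(target) <= t0, i.e. approximately t0 = TT."""
    t0 = 0.0
    while t0 < t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(drone_pos, target) <= t0:
            return target
        t0 += dt
    # Ball cannot be intercepted within t_max: fall back to the
    # position extrapolated over the full horizon
    return (ball_pos[0] + ball_vel[0] * t_max,
            ball_pos[1] + ball_vel[1] * t_max)
```

For a stationary ball the search simply terminates once t0 exceeds the travel time, so the reference equals the ball position itself, matching the close-range case where the look-ahead adds little.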
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, path planning should create paths for the agents that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning computed from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision-avoidance mode to keep the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone. This command is sent to the LLC and is stopped once the drones are in safe positions. Since this project deals with only one drone, collision avoidance has not been implemented; it could, however, be an area of interest for those continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
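The repulsion rule above (a velocity command perpendicular to each drone's own velocity, directed away from the other drone) can be sketched as follows. The trigger criterion, the safe distance `d_safe` and the repulsion speed `v_repel` are illustrative assumptions; only the perpendicular-command idea comes from the text.

```python
import math

def avoidance_commands(pos_a, vel_a, pos_b, vel_b, d_safe=1.0, v_repel=1.0):
    """If two drones are closer than d_safe, return a velocity command per
    drone, perpendicular to that drone's velocity and pointing away from
    the other drone. Returns None when no avoidance is needed."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    if math.hypot(dx, dy) >= d_safe:
        return None

    def repel(vel, away):
        # Of the two unit vectors perpendicular to vel, pick the one
        # whose projection on the 'away' direction is non-negative
        n = math.hypot(vel[0], vel[1]) or 1.0
        p = (-vel[1] / n, vel[0] / n)
        if p[0] * away[0] + p[1] * away[1] < 0:
            p = (-p[0], -p[1])
        return (v_repel * p[0], v_repel * p[1])

    return repel(vel_a, (-dx, -dy)), repel(vel_b, (dx, dy))
```

The supervisor would keep issuing these commands to the LLC until the drones are back at a safe distance, after which normal path planning resumes.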
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
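The storage design above (ball, drone and turtle as plain properties, players as a class of their own, mutation only through 'set' functions) can be sketched as follows. The actual set-function names are defined in Table 1, which is an image here, so the names `set_ball` and `set_player` below are hypothetical placeholders, and the sketch is Python rather than the project's MATLAB class.

```python
class Player:
    """Stores the last known state of one player."""
    def __init__(self):
        self.position = (0.0, 0.0)

class WorldModel:
    """Central storage unit: ball, drone and turtle are plain properties
    (their number is fixed), while the players, whose number varies, are
    objects of their own class. State is only changed through explicit
    'set' functions, so processes cannot accidentally overwrite WM data."""
    def __init__(self, n_players):
        self.ball = (0.0, 0.0)
        self.drone = (0.0, 0.0, 0.0)   # x, y, yaw
        self.turtle = (0.0, 0.0)
        # two teams of n_players each
        self.players = [Player() for _ in range(2 * n_players)]

    def set_ball(self, pos):
        self.ball = pos

    def set_player(self, i, pos):
        self.players[i].position = pos

# Initialization as in the text: W = WorldModel(n)
W = WorldModel(5)
W.set_ball((1.2, -0.4))
```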
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter offers clear advantages. A particle filter, also known as Monte Carlo localization, was chosen. The main reason is that a particle filter can handle multiple-object tracking, which proves useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as explained below. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
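The reset logic of the current implementation can be sketched as follows. Here raw measurements play the role of the 'weak' hypothesis, the 'strong' estimate is represented by a simple exponential smoother (the actual strong filter is the particle filter with the update rule given below), and the 0.5 m threshold and two-outlier rule come from the text; the smoothing factor `alpha` is an illustrative assumption.

```python
class DualHypothesisFilter:
    """Sketch of the two-hypothesis reset logic: two consecutive
    measurements further than `threshold` from the strong estimate
    re-initialize it, while a single outlier is treated as a possible
    false positive and ignored."""
    def __init__(self, init_pos, threshold=0.5, alpha=0.2):
        self.estimate = init_pos
        self.threshold = threshold
        self.alpha = alpha      # smoothing factor of the 'strong' filter
        self.outliers = 0

    def update(self, z):
        dist = ((z[0] - self.estimate[0]) ** 2 +
                (z[1] - self.estimate[1]) ** 2) ** 0.5
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:   # sustained: accept as direction change
                self.estimate = z
                self.outliers = 0
        else:
            self.outliers = 0
            # strong filter: small step toward the measurement
            self.estimate = (
                self.estimate[0] + self.alpha * (z[0] - self.estimate[0]),
                self.estimate[1] + self.alpha * (z[1] - self.estimate[1]))
        return self.estimate
```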
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are both used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate from inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one player to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with sensors that can detect multiple players at once. The system thus needs to know which measurement corresponds to which player; this is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
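The ‘Match’ function described above can be sketched as a greedy nearest-neighbour assignment. This is an illustrative Python sketch of the scheme in the text (including its non-optimal fallback to the next-nearest free player on a conflict), not the project's MATLAB code.

```python
def match(measurements, known_positions):
    """Greedily match each measured position to the nearest last-known
    player position; if that player is already taken, fall back to the
    next-nearest free player. Returns {measurement index: player index}."""
    assignment = {}
    taken = set()
    for m_idx, m in enumerate(measurements):
        # players sorted by squared distance to this measurement
        order = sorted(range(len(known_positions)),
                       key=lambda p: (m[0] - known_positions[p][0]) ** 2 +
                                     (m[1] - known_positions[p][1]) ** 2)
        for p in order:
            if p not in taken:
                assignment[m_idx] = p
                taken.add(p)
                break
    return assignment
```

With a high update frequency and only two players this greedy scheme suffices; with many players entering and leaving a sensor's field of view, an optimal assignment (e.g. solving the full assignment problem) would be more robust, as the text notes.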
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, three LEDs on the drone can be detected by the camera above the field; based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control of the drone can be made robust. As the flying height of the drone is not demanding for the system, the height of the drone is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the reported drone position information is incomplete. The example (fig.2) gives a visual impression of the original data measured by the top camera; it clearly shows what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces reasonable estimates for the empty data points. <br />
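The gap-filling step can be sketched as a linear interpolation over the missing camera frames. This is an illustrative Python sketch for one degree of freedom (the MATLAB implementation would typically use `interp1`); frames where the camera missed the drone are represented as `None`.

```python
def interpolate_gaps(samples):
    """Linearly fill `None` entries (frames where the top camera did not
    detect the drone) between valid position samples. Gaps at the start
    or end of the record are left untouched."""
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                      # find the end of the gap
            if 0 < i and j < len(out):      # gap with valid endpoints
                a, b = out[i - 1], out[j]
                for k in range(i, j):
                    t = (k - (i - 1)) / (j - (i - 1))
                    out[k] = a + t * (b - a)
            i = j
        else:
            i += 1
    return out
```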
====Coordinate system introduction====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one in the body frame and one in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body-frame coordinate system, while the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
Data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model identified is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid making the Kalman filter parameter-varying. Figure 5 describes this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
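The frame transformation above is a standard planar rotation by the yaw angle ψ. The following is an illustrative Python sketch of the two rotations used around the Kalman filter (body-frame filtering, global-frame feedback):

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a vector measured in the drone body frame into the global
    frame, using the yaw angle psi (rad) from the top camera."""
    return (math.cos(psi) * vx_body - math.sin(psi) * vy_body,
            math.sin(psi) * vx_body + math.cos(psi) * vy_body)

def global_to_body(vx_glob, vy_glob, psi):
    """Inverse rotation: a global-frame command into the body frame,
    e.g. before sending fly commands to the drone."""
    return (math.cos(psi) * vx_glob + math.sin(psi) * vy_glob,
            -math.sin(psi) * vx_glob + math.cos(psi) * vy_glob)
```

Because the rotation lives outside the filter, the filter itself sees only body-frame data and its model matrices stay constant, which is exactly the point made in the text.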
====Model identification from input to position====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response of input b is measured by the top camera. The preprocessed data is shown in following. And this processed data will be used in model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from the data shown above. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates to what extent the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was measured and modeled with 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimate is reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the subsequent Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br><br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'', the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the most recent state of the drone together with the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms that reduce their false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection skill is implemented using the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height information is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixel units is fed into the ball detection skill.<br />
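The height-to-pixel-radius calculation can be sketched as follows. This is an illustrative Python sketch; the 60° diagonal FOV, 640-pixel width and 4:3 aspect ratio match the Ai-Ball figures given later on this page, but the ball radius and the tangent-based diagonal-to-horizontal FOV conversion are assumptions of this sketch.

```python
import math

def expected_radius_px(height_m, ball_radius_m=0.11, fov_deg=60.0,
                       image_width_px=640, aspect=(4, 3)):
    """Estimate the expected ball radius in pixels for a downward-looking
    camera at height_m, from the camera FOV and the real ball size."""
    diag = math.hypot(*aspect)
    # approximate horizontal FOV from the diagonal FOV via the aspect ratio
    h_fov = 2 * math.atan(math.tan(math.radians(fov_deg) / 2) * aspect[0] / diag)
    # width of the ground footprint seen by the camera, in meters
    ground_width_m = 2 * height_m * math.tan(h_fov / 2)
    px_per_m = image_width_px / ground_width_m
    return ball_radius_m * px_per_m
```

As expected, the radius in pixels scales inversely with the flight height, which is why the estimator must be re-evaluated from the current drone state.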
<br />
===Object Size Estimator===<br />
Very similarly to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is defined here. The estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It always calculates the relative position of the outer lines corresponding to the state of the drone; this position information is encoded using the Hough transform convention. The line estimator is needed to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection skill should be enabled, and otherwise disabled. This information is also encoded in the output matrix, because an always-running Line Detection skill would produce many false positive line outputs. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or a ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone as given on the manufacturer's website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high-quality HD video streams to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, it was decided to use the drone's own structure, control electronics and software for positioning the drone. Apart from that, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera faces the front of the drone; for refereeing, however, it should look down. Therefore, the first idea was to disassemble it and mount the camera on a swivel to tilt it down 90 degrees, which would require some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after considerable trial and error, it turned out that capturing and transferring images from the drone's embedded camera is not straightforward in MATLAB: using this camera is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. Image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a horizontal FOV close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively, the third value is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
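The input side of such a wrapper can be sketched as follows: the four doubles are clamped to [-1, 1] and packed into an AT*PCMD progressive-command string, in which the AR.Drone SDK encodes each float as the signed 32-bit integer sharing its IEEE-754 bit pattern. This is an illustrative Python sketch (the actual UDP send to port 5556 and the 500-byte navdata decoding are omitted); the helper name `f2i` is of course made up.

```python
import struct

def f2i(x):
    """Encode a float as the signed 32-bit integer with the same
    IEEE-754 bit pattern, as the AT command protocol expects."""
    return struct.unpack('<i', struct.pack('<f', x))[0]

def pcmd(seq, roll, pitch, gaz, yaw):
    """Build an AT*PCMD progressive-command string from four doubles in
    [-1, 1]: tilt x, tilt y, vertical speed, angular speed. `seq` is the
    running sequence number required by the protocol."""
    vals = [max(-1.0, min(1.0, v)) for v in (roll, pitch, gaz, yaw)]
    return 'AT*PCMD={},1,{},{},{},{}\r'.format(seq, *(f2i(v) for v in vals))
```

A zero command conveniently encodes to all zeros, since the bit pattern of 0.0 is 0.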
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
The ‘GigeCam’ toolbox of MATLAB® is used to access this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of it. A snapshot of the field together with the agent is taken with a short exposure time. By processing this image to find the pixels illuminated by the LEDs on the drone, the coordinates on the x and y axes are obtained; the yaw (ψ) orientation of the drone follows from the relative positions of these pixels.<br />
<br />
The topcam can stream images to the laptop at a framerate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
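The pose reconstruction from the three LED pixels can be sketched as follows. The layout assumed here (one front LED, two rear LEDs) and the pixel-to-meter scale are illustrative assumptions; the actual LED arrangement on the drone and the camera calibration may differ.

```python
import math

def drone_pose_from_leds(front_px, rear_left_px, rear_right_px, m_per_px=0.01):
    """Illustrative pose reconstruction from three LED pixel coordinates:
    position is the centroid of the three LEDs, yaw follows from the
    vector from the rear-LED midpoint to the front LED."""
    rear = ((rear_left_px[0] + rear_right_px[0]) / 2,
            (rear_left_px[1] + rear_right_px[1]) / 2)
    cx = (front_px[0] + rear_left_px[0] + rear_right_px[0]) / 3
    cy = (front_px[1] + rear_left_px[1] + rear_right_px[1]) / 3
    psi = math.atan2(front_px[1] - rear[1], front_px[0] - rear[0])
    return (cx * m_per_px, cy * m_per_px, psi)
```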
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further extension, as part of the extensive existing code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities, which allow them to move and take images. Depending on the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block and is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and are used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image-processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are straight-line-like trajectories; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor performs only small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, through a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Motion Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative with PD coefficients (Fig. 4). Since there is no position-dependent force in the motion equation of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents small commands from being sent to the drone in the oscillation region.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
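The dead-zone PD behaviour described above can be sketched as follows. This is a minimal illustration in Python (the project itself implements the controller in Matlab/Simulink); the function name and gain values are hypothetical.

```python
def hlc_command(error, d_error, dead_zone, kp, kd):
    """High-level controller output for one drone state (x, y or theta).

    Inside the dead zone the command is zero, so the drone rests in its
    comfort zone. Outside it, a PD law acts on the raw (non-offset) error,
    which keeps small commands away from the LLC's oscillatory region.
    No I-action is used, as the drone's motion equation contains no
    position-dependent force.
    """
    if abs(error) < dead_zone:
        return 0.0
    return kp * error + kd * d_error
```

For example, with a dead zone of 0.1 m, a 5 cm position error produces no command at all, while larger errors yield a command proportional to the error and its derivative.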
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Ф,φ,θ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is important, as is the sequence of rotations. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
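The reduced, yaw-only transformation can be sketched in a few lines. This is an illustrative Python version (the project uses Matlab); the function name is hypothetical.

```python
import math

def global_to_drone(vx_g, vy_g, psi):
    """Rotate a planar command from the global (field) frame into the
    drone body frame. With the height fixed and roll/pitch assumed small,
    the full RPY rotation reduces to a rotation about the yaw angle psi."""
    vx_d = math.cos(psi) * vx_g + math.sin(psi) * vy_g
    vy_d = -math.sin(psi) * vx_g + math.cos(psi) * vy_g
    return vx_d, vy_d
```

With psi = 0 the frames coincide; with psi = 90 degrees, a global x-velocity maps onto the drone's negative y-axis.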
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy with a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
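The UDP command path can be illustrated with a few lines of Python. Note that the command format, IP address and port below are placeholders, not the actual protocol; the real command strings are defined in the GitHub repository.

```python
import socket

def send_command(cmd, ip="192.168.1.10", port=5000):
    """Send a command string over UDP to the Python script on the
    Raspberry Pi, which forwards it to the Arduino via USB. The IP,
    port and command format are illustrative placeholders."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(cmd.encode("ascii"), (ip, port))
    finally:
        sock.close()
```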
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many different subsystems and components. Each component has its own tasks and functions, all of which are implemented using built-in Matlab® functions and libraries. To achieve the project aims, communication between all the components is required for queuing and ordering the tasks. Some tasks and functions are required to work simultaneously, some are consecutive and others are independent. Additionally, the implementation should be compatible with the system architecture, which is layered. To handle this simultaneous communication and layered structure, Simulink is used for programming. The resulting Simulink diagram is given in the following figure.<br />
<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
== Properties of the Simulink File ==<br />
* In the Simulink file, blocks are categorized using the ''Area'' utility, according to their functions/tasks.<br />
* The categorization is shown via different colors and these divisions are consistent with the system architecture.<br />
* The interconnections between the functioning blocks are made via ''GoTo'' and ''From'' blocks, and parameter names are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation. <br />
* The functions and blocks under the ''Visualizations'' area are not part of the main tasks of the project but are necessary to see the results. Therefore this part is not part of the ''System Architecture''.<br />
* Since almost all functions and built-in commands in the algorithms are not directly available to Simulink, each Matlab function is called from Simulink using the ''extrinsic'' command.<br />
* All blocks and functions are well commented. The details of the algorithms can be examined via the source code.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone, which were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code base of TechUnited. It consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''.<br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
<hr />
Implementation MSD16
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the BOOP detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is and do not alter it. Preferably we would use this software to also process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used can be red, orange or yellow; colors that are in the upper-left corner of the CbCr-plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image a blob recognition algorithm returns blobs with their properties, such as the blob center and major- and minor axis length. From this list with blobs and their properties, it is determined if it could be a ball. Blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
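The confidence formula above can be written out directly. This is an illustrative Python version of that formula only (the project implements it in Matlab); the function name is hypothetical.

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a detected blob is the ball, as in the formula
    above: the roundness factor (minor/major axis ratio) times the size
    factor (detected blob radius vs. expected ball radius). Both factors
    lie in (0, 1], so a perfectly round, correctly sized blob scores 1.0."""
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size
```

A slightly elliptical blob of roughly the right size still scores high, while a very elongated or badly sized blob scores near zero.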
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering on color in the CbCr plane, the filtering is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blobs that could be players is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different than one seen from an angle. A wider range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it still sometimes yields false positives and false negatives, so further improvement of the refereeing is necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
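The condition above can be packaged as a predicate. This is an illustrative Python rendering of the same check (the project implements it in Matlab); the function name is hypothetical.

```python
def possible_collision(minor_axis, major_axis, min_radius):
    """Flag one elongated blob roughly two players wide as a possible
    collision: the blob is clearly non-round (axis ratio > 1.5) and both
    axes exceed what a single player alone could produce."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)
```

Two touching players merge into a single blob whose major axis spans both of them, which is exactly the shape this predicate accepts.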
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under an essential assumption: the drone's angular positions are well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter and its output data is accessible. The obtained altitude is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as 'droneState'.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further under the following assumptions:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the drone's x-axis. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion factor changes with the height of the camera, so the height information of the drone is used. Using the drone height and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
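Under the assumptions listed above, the whole pixel-to-field mapping can be sketched as follows. This is an illustrative Python version (the project uses Matlab); all parameter names are hypothetical, and a simple pinhole model relates the mm-per-pixel ratio to the camera height and FOV.

```python
import math

def pixel_to_field(px, py, img_w, img_h, fov_x, fov_y,
                   z, drone_x, drone_y, psi, cam_offset):
    """Map a detected pixel (px, py), measured from the image centre, to
    field coordinates. The mm-per-pixel ratio follows from the camera
    height z (in mm) and the horizontal/vertical field of view; the
    camera sits cam_offset along the drone's x-axis and is assumed
    parallel to the ground, as stated above."""
    mm_per_px_x = 2.0 * z * math.tan(fov_x / 2.0) / img_w
    mm_per_px_y = 2.0 * z * math.tan(fov_y / 2.0) / img_h
    # object position in the drone frame
    xd = cam_offset + px * mm_per_px_x
    yd = py * mm_per_px_y
    # rotate by the drone's yaw and translate by the drone's position
    xf = drone_x + math.cos(psi) * xd - math.sin(psi) * yd
    yf = drone_y + math.sin(psi) * xd + math.cos(psi) * yd
    return xf, yf
```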
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig. 1, it is assumed that the World Model can provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on its dynamics. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path-planning block. The first relates to the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can also be taken into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between drone and ball is large, the drone should track a position ahead of the object to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve it, we require a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the drone's initial conditions. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig. 3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied. For the ground robot, the reference value should be determined only in the moving direction of the Turtle; hence, only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has higher priority than the optimal path planning calculated from the drones' objectives (see Fig. 4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone; it is sent to the LLC as a velocity command in the direction that results in collision avoidance, and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
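The storage pattern described by the two tables can be sketched as a small class. This is an illustrative Python version of the (not integrated) Matlab World Model class; the method names are placeholders, not the actual interface from the tables.

```python
class WorldModel:
    """Minimal sketch of the WM storage role: last known positions can be
    read globally but are changed only through dedicated 'set' functions,
    so no process accidentally overwrites WM data."""

    def __init__(self, n_players):
        # n_players is the number of players per team; one ball is assumed
        self._ball = None
        self._drone = None
        self._players = [None] * (2 * n_players)

    def set_ball(self, x, y):
        self._ball = (x, y)

    def set_drone(self, x, y, psi, z):
        self._drone = (x, y, psi, z)

    def set_player(self, idx, x, y):
        self._players[idx] = (x, y)

    def get_ball(self):
        return self._ball

    def get_player(self, idx):
        return self._players[idx]
```

As in the text, initialization mirrors W = WorldModel(n): the ball and drone are single properties, while the players form a variable-size collection accessed by index.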
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; its hypothesis is updated purely by new measurements. In case two consecutive measurements are more than 0.5 m away from the current estimate, the last one acts as the new initial value for the strong filter. <br><br><br />
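The switching logic described above can be sketched as follows. This is an illustrative Python sketch, not the project's MATLAB implementation; the 0.5 m threshold and the two-consecutive-outliers rule are taken from the text, while the class and parameter names are invented for illustration.

```python
import math

class DualHypothesisFilter:
    """Track a 'strong' (heavily filtered) ball estimate and reset it
    when consecutive measurements disagree with it, indicating a real
    change in direction rather than a single false positive."""

    def __init__(self, dist_threshold=0.5, outliers_needed=2):
        self.strong = None                  # (x, y) strong-filter estimate
        self.dist_threshold = dist_threshold
        self.outliers_needed = outliers_needed
        self.outliers = []                  # recent far-away measurements

    def update(self, meas, alpha=0.1):
        if self.strong is None:
            self.strong = meas
            return self.strong
        if math.dist(self.strong, meas) > self.dist_threshold:
            self.outliers.append(meas)
            if len(self.outliers) >= self.outliers_needed:
                # Multiple consistent outliers: treat as an actual change
                # in direction and re-initialize the strong filter.
                self.strong = meas
                self.outliers = []
        else:
            self.outliers = []              # lone outlier: false positive
            # 'Strong' filtering: move only slightly toward the measurement.
            self.strong = tuple(s + alpha * (m - s)
                                for s, m in zip(self.strong, meas))
        return self.strong
```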
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As mentioned before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are both used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, each sensor passes along a confidence parameter, such as a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate from inaccurate sensors. In the current implementation this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the fact that the sensor(s) can detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest. It performs a nearest-neighbor search for each incoming position measurement, to match it to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case the set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm assigns the second nearest neighbor to the second measured player. With a high update frequency and only two players, this generally is not a problem. However, with a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
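The greedy matching strategy described above can be sketched as follows; this is an illustrative Python version, not the actual implementation, and the function name is invented.

```python
import math

def match_measurements(measurements, players):
    """Match each measured position to the nearest player that has not
    been claimed yet; if the nearest player is already taken, fall back
    to the next nearest, as the text describes (greedy, not optimal)."""
    matches = {}
    taken = set()
    for i, m in enumerate(measurements):
        # Candidate players sorted by distance to this measurement.
        for j in sorted(range(len(players)),
                        key=lambda j: math.dist(m, players[j])):
            if j not in taken:
                matches[i] = j          # measurement i -> player j
                taken.add(j)
                break
    return matches
```

With only two players this behaves well, but as the text notes, a globally optimal assignment (e.g. the Hungarian algorithm) would scale better to many players.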
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and sideways velocities in the body frame can be measured by sensors inside the drone. At the same time, three LEDs on the drone can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and reduce the measurement noise, so that the closed-loop control of the drone is robust. Since the flying height of the drone is not critical to the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-backward tilt, a floating-point value in the range [-1, 1]. Command (b) is the left-right tilt, a floating-point value in the range [-1, 1]. d is the drone angular speed in the range [-1, 1]. The forward and side velocities are expressed in the body frame (orange). The position (x, y, ψ) is expressed in the global frame (blue). ]]</center><br />
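To illustrate how a Kalman filter can bridge frames in which the top camera misses the drone LEDs, the following Python sketch implements one predict/update step for a single degree of freedom. It uses a generic constant-velocity model with assumed noise values, not the identified drone model described below; when no measurement is available, only the prediction step runs.

```python
import numpy as np

def kf_step(x, P, z, dt, q=0.1, r=0.05):
    """One predict/update step of a 1-D constant-velocity Kalman filter.
    x = [position, velocity]; z is the camera measurement, or None when
    the top camera failed to detect the drone LEDs in this frame."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
    Q = q * np.eye(2)                        # assumed process noise
    H = np.array([[1.0, 0.0]])               # camera measures position only
    R = np.array([[r]])                      # assumed measurement noise

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update only when the LEDs were actually detected
    if z is not None:
        y = z - (H @ x)[0]                   # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K.flatten() * y
        P = (np.eye(2) - K @ H) @ P
    return x, P
```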
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. Figure 2 gives a visual impression of the raw data measured by the top camera; it clearly shows what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation is applied. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces reasonable estimates for the empty data points. <br />
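The gap-filling step above amounts to linear interpolation over the missing frames. A minimal Python sketch (the actual preprocessing was done in MATLAB; the function name is illustrative):

```python
import numpy as np

def fill_gaps(t, values):
    """Linearly interpolate missing (NaN) samples in the top-camera data,
    as done in the preprocessing step where roughly 25% of the frames
    contain no drone detection."""
    t = np.asarray(t, dtype=float)
    values = np.asarray(values, dtype=float)
    good = ~np.isnan(values)            # frames where the drone was detected
    return np.interp(t, t[good], values[good])
```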
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one in the body frame, the other in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between body frame and global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c, d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid making the Kalman filter parameter-varying. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimation is nevertheless reasonable. <br><br><br />
<br />
In the real world, no system is perfectly linear; nonlinear behavior of the system may cause part of the mismatch of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification is measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the most recent state of the drone and information about the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, to reduce false positives, errors and the processing time of the algorithms. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill uses the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixels is fed into the ball detection skill.<br />
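The pixel-radius calculation can be sketched as follows. This assumes a pinhole camera looking straight down, so the ground footprint of the image is 2·h·tan(FOV/2) wide; the function name and parameters are illustrative, not the project's MATLAB code.

```python
import math

def expected_ball_radius_px(height_m, fov_h_deg, image_width_px, ball_radius_m):
    """Estimate the ball radius in pixels as seen by a downward-facing
    camera at height_m with horizontal FOV fov_h_deg."""
    # Width of the ground area covered by the image, in meters.
    footprint_m = 2.0 * height_m * math.tan(math.radians(fov_h_deg) / 2.0)
    meters_per_px = footprint_m / image_width_px
    return ball_radius_m / meters_per_px
```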
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is defined here. The estimated object radius in pixels is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It continuously calculates the position of the outer lines relative to the state of the drone, encoded using the Hough transform parameterization. The line estimator is used to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, otherwise it should be disabled. This enable/disable information is also encoded in the output matrix, because an always-running Line Detection Skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is given [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used as the refereeing platform. The relevant properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project it was decided to use the drone’s own structure, control electronics and software for positioning the drone. Moreover, low-level control of a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downward. Therefore, the first idea was to disassemble it and mount the camera on a swivel to tilt it down 90 degrees, which would require some structural changes. Since all the implementation is achieved in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring images from the drone’s embedded camera to MATLAB is not straightforward; further effort showed that using this camera is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with the built-in software. An indirect way is therefore required, which costs processing time: the best capture rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a FOV close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2; the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
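The actual implementation uses MATLAB UDP objects; the same setup can be sketched in Python using the ports and timeout listed above. The function name is illustrative, and for testing an ephemeral local port can be bound instead of 5554.

```python
import socket

DRONE_IP = "192.168.1.1"     # remote host of the drone
CONTROL_PORT = 5556          # AT commands are sent to this remote port
NAVDATA_PORT = 5554          # navdata arrives on this local port

def make_sockets(navdata_port=NAVDATA_PORT, timeout_s=0.001):
    """Create the control and navdata UDP sockets with the parameters
    listed above (1 ms navdata timeout, navdata port bound locally)."""
    control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata.settimeout(timeout_s)
    navdata.bind(("", navdata_port))
    return control, navdata
```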
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
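The exact string format produced by the MATLAB wrapper is not shown here, but per the AR.Drone SDK the movement command is an AT*PCMD string in which each floating-point argument is transmitted as the signed 32-bit integer sharing the float's IEEE-754 bit pattern. A Python sketch of that encoding (function names are illustrative):

```python
import struct

def float_arg(f):
    """Encode a float as the signed 32-bit integer with the same
    IEEE-754 bit pattern, as the AR.Drone AT protocol requires."""
    return struct.unpack("<i", struct.pack("<f", f))[0]

def pcmd(seq, roll, pitch, gaz, yaw):
    """Build an AT*PCMD progressive command string; each argument is a
    value in [-1, 1], and flag=1 enables progressive commands."""
    args = ",".join(str(float_arg(v)) for v in (roll, pitch, gaz, yaw))
    return "AT*PCMD={},1,{}\r".format(seq, args)
```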
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
The ‘GigeCam’ toolbox of MATLAB® is used to access this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of it. A snapshot of the field, including the agent, is taken with a short exposure time. This image is then processed to find the pixels illuminated by the LEDs on the drone, yielding the coordinates on the x and y axes. The yaw (ψ) orientation of the drone is also obtained, from the relative positions of these pixels.<br />
<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
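How a pose can be recovered from the three LED pixels can be sketched as follows. The LED layout is a hypothetical assumption here (an isosceles triangle whose apex LED marks the drone's front); the actual layout and code are not described in detail in the text.

```python
import math

def drone_pose_from_leds(leds):
    """Given the pixel coordinates of the 3 LEDs on top of the drone,
    return (x, y, yaw). Assumes the LEDs form an isosceles triangle
    whose apex LED points toward the front of the drone."""
    cx = sum(p[0] for p in leds) / 3.0
    cy = sum(p[1] for p in leds) / 3.0
    # For an isosceles layout the apex is the LED farthest from the
    # centroid; the heading is the direction from centroid to apex.
    apex = max(leds, key=lambda p: math.dist(p, (cx, cy)))
    yaw = math.atan2(apex[1] - cy, apex[0] - cx)
    return cx, cy, yaw
```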
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The Ai-Ball has a 480p resolution with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel, and is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
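The conversion from the diagonal FOV to a real-world dimension per pixel can be sketched as follows (the function name is illustrative; a downward-facing pinhole camera is assumed, using the Ai-Ball figures of a 60° diagonal FOV and a 640x480 image):

```python
import math

def meters_per_pixel(height_m, diag_fov_deg=60.0, width_px=640, height_px=480):
    """Ground distance covered by one pixel for a downward-facing camera,
    computed from the diagonal FOV and the image resolution."""
    diag_px = math.hypot(width_px, height_px)     # 800 px for 640x480
    # Ground length of the image diagonal at the given height.
    diag_m = 2.0 * height_m * math.tan(math.radians(diag_fov_deg) / 2.0)
    return diag_m / diag_px
```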
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details of the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive existing code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agents’ positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple trajectories such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. With identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, using the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors (Fig. 3). The input-output diagram of the controller for each state consists of three regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined by the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an integral action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix based on Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
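The dead-zone PD law described above can be sketched as follows; this is an illustrative Python version of the controller for one state, with hypothetical gains, not the tuned MATLAB implementation.

```python
def dead_zone_pd(error, d_error, kp, kd, dead_zone):
    """PD controller with a dead zone: output zero while the error is
    inside the comfort zone, and a plain PD command outside it. As in
    the text, the error is not offset by the dead-zone width."""
    if abs(error) < dead_zone:
        return 0.0                    # comfort zone: send no command
    return kp * error + kd * d_error  # PD action outside the dead zone
```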
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles; within this method, the order of the rotations about the specific axes is important. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
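With small roll and pitch angles, the planar transformation reduces to a standard 2-D rotation by the yaw angle. A minimal sketch (function name illustrative):

```python
import math

def global_to_body(vx_g, vy_g, yaw):
    """Transform a planar command from the global frame to the drone
    body frame; with small roll/pitch, the rotation depends on the
    yaw angle only."""
    c, s = math.cos(yaw), math.sin(yaw)
    vx_b = c * vx_g + s * vy_g
    vy_b = -s * vx_g + c * vy_g
    return vx_b, vy_b
```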
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many different subsystems and components, each with its own tasks and functions. All tasks and skills are implemented using built-in MATLAB® functions and libraries. To achieve the project aims, communication, queuing and ordering between all components is required: some tasks and functions need to run simultaneously, some are consecutive and others are independent. Additionally, the implementation should be compatible with the layered system architecture. To handle simultaneous communication and the layered structure, Simulink is used for the implementation. The resulting Simulink diagram is given in the following figure.<br />
<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
== Properties of the Simulink File ==<br />
* In the Simulink file, blocks are categorized using the ''Area'' utility, according to their functions/tasks.<br />
* The categorization is shown via different colors, and these divisions are consistent with the system architecture.<br />
* The interconnections between the functional blocks are made via ''GoTo'' and ''From'' blocks, and parameter names are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation. <br />
* The functions and blocks under the ''Visualizations'' area are not part of the main tasks of the project, but are necessary to see the results; this part is therefore not part of the ''System Architecture''.<br />
* Since almost all functions and built-in commands in the algorithms are not directly available in Simulink, each MATLAB function is called from Simulink using the ''extrinsic'' command.<br />
* Each block and function is well commented; the details of the algorithms can be examined via the source code.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from these images and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of the players<br><br />
and other entities present on the field, can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone, which were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. As stated earlier, this data is the information on the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code-base of TechUnited. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment. The data is then sent to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
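The essence of this UDP link can be sketched in a few lines of Python. The payload layout and names below are hypothetical for illustration; the actual S-function uses TechUnited's own message format.

```python
import socket
import struct

# Hypothetical payload: (turtle_x, turtle_y, ball_x, ball_y) as four
# little-endian doubles -- the real S-function defines its own format.
FMT = "<4d"

def send_state(sock, addr, turtle_xy, ball_xy):
    """Pack a minimal game-state sample and send it over UDP."""
    sock.sendto(struct.pack(FMT, *turtle_xy, *ball_xy), addr)

def recv_state(sock):
    """Receive one packet and unpack it back into positions (metres)."""
    data, _ = sock.recvfrom(1024)
    tx, ty, bx, by = struct.unpack(FMT, data)
    return (tx, ty), (bx, by)

# Loopback round-trip standing in for the Turtle-to-Windows link.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # let the OS pick a free port
addr = rx.getsockname()
tx_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_state(tx_sock, addr, (1.0, -2.5), (0.3, 0.7))
turtle, ball = recv_state(rx)
print(turtle, ball)
```

In the actual setup, the UDP Send/Receive blocks in Simulink play the role of these two sockets.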
<br />
=References=<br />
<references/></div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45720Implementation MSD162017-10-22T23:33:00Z<p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as-is and do not alter it. Preferably, we would also use this software to process the images from the drone. However, understanding years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
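To illustrate why this color space is convenient, the conversion below (the common JPEG full-range variant of YCbCr, not necessarily the exact matrix used by the toolbox) shows that a saturated yellow pixel lands far from neutral on the Cb axis, i.e. in the corner of the CbCr plane used for the ball filter.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr (JPEG full-range variant)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# A saturated yellow pixel: high luma, Cb far below 128, Cr above 128,
# so a simple threshold in the CbCr plane separates it from the field.
y, cb, cr = rgb_to_ycbcr(255, 255, 0)
print(round(y), round(cb), round(cr))
```

Thresholding on (Cb, Cr) is largely insensitive to brightness changes, which is exactly what the color filter needs on a field with uneven lighting.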
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and reused in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is that the line detection code was separated from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; these colors lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter out some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
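The size filtering and the confidence formula above can be sketched as follows. The blob fields and thresholds are illustrative names, not those of the actual Matlab code.

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence = roundness term x size term, as in the formula above."""
    roundness = minor_axis / major_axis               # 1.0 for a circle
    size = min(r_blob, r_ball) / max(r_blob, r_ball)  # 1.0 at expected radius
    return roundness * size

def filter_balls(blobs, r_ball, r_min, r_max):
    """Drop blobs of implausible size, attach a confidence to the rest."""
    return [(b, ball_confidence(b["minor"], b["major"], b["radius"], r_ball))
            for b in blobs if r_min <= b["radius"] <= r_max]

# A round blob at the expected radius scores 1.0; an elongated or
# wrongly sized blob scores lower.
print(ball_confidence(10.0, 10.0, 5.0, 5.0))   # -> 1.0
print(ball_confidence(8.0, 12.0, 4.0, 5.0))
```

Both factors are bounded by 1, so the confidence itself stays in [0, 1] and can be compared directly across blobs.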
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering in the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blobs that could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball: if a player is seen from the top, they appear different than when they are seen from an angle. A bigger acceptance range for blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well; further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For collision detection, we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and they are compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
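The intuition behind the condition can be expressed as a small predicate (a sketch in Python using the names from the condition above):

```python
def possible_collision(minor_axis, major_axis, minimal_object_radius):
    """True for a blob that is both elongated (two merged players are
    not round) and large enough to contain two players."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * minimal_object_radius
            and major_axis >= 4 * minimal_object_radius)

r = 10.0
print(possible_collision(20.0, 45.0, r))  # elongated, double-sized blob
print(possible_collision(21.0, 23.0, r))  # single round-ish player
```

A single player yields a roughly round blob (axis ratio near 1), so only merged blobs of sufficient size trigger the collision flag.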
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (here, only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore these two coordinates are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter and its output data is accessible. The obtained altitude data is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as 'droneState'.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the coordinates of the detected objects are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
<br />
<br />
Next, the pixel coordinates of the center of each detected object or ball are calculated relative to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to be the focal center of the camera, i.e. the two are coincident.<br />
* The camera is always parallel to the ground plane; tilting of the drone about its roll (φ) and pitch (θ) axes is neglected.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between the camera, the drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the center of the image) with respect to the origin of the drone is known; it is a drone-fixed position vector lying along the x-axis of the drone. Taking the principles above into account and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Finally, the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera, so the height information of the drone must be used. Using the height of the drone and the FOV of the camera, the pixel-to-millimeter ratio is calculated. More detailed information about the FOV is given in the next sections.<br />
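Under the assumptions listed above, the whole pixel-to-field mapping reduces to a scale, an offset and a yaw rotation. The sketch below shows one way to compose these steps; all parameter names are illustrative, not those of the project code.

```python
import math

def pixel_to_field(px, py, drone_x, drone_y, psi, z,
                   fov_x, img_w, cam_offset):
    """Map pixel offsets (px, py), measured from the image centre, to
    field coordinates, assuming a downward-looking camera parallel to
    the ground with the narrow image edge along the drone y-axis.
    fov_x is the horizontal field of view in radians; cam_offset is the
    camera's distance from the drone origin along the drone x-axis."""
    # metres per pixel at height z (isotropic pixels assumed)
    m_per_px = 2.0 * z * math.tan(fov_x / 2.0) / img_w
    # offset of the detection in the drone body frame
    bx = cam_offset + px * m_per_px
    by = py * m_per_px
    # rotate by the drone yaw psi and translate by the drone position
    fx = drone_x + bx * math.cos(psi) - by * math.sin(psi)
    fy = drone_y + bx * math.sin(psi) + by * math.cos(psi)
    return fx, fy

# Drone hovering at 1 m over the origin, yaw 0, 90-degree FOV, 100 px
# wide image: a detection 50 px right of centre is about 1 m away.
fx, fy = pixel_to_field(50, 0, 0.0, 0.0, 0.0, 1.0, math.pi / 2, 100, 0.0)
print(fx, fy)
```

The `m_per_px` factor is exactly the height-dependent pixel-to-millimeter ratio described in the text (here in metres).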
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent; for instance, it sends 'detect ball' as a task to agent A (the drone) and 'locate player' to agent B. The path planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent's controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent's camera. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path planning block. The first is related to the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can also be taken into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object in order to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball at a time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is finding the optimal time ahead t0 that should be used for the desired reference. To solve this, we require a model of the drone's motion with its controller, to calculate the time it takes to reach a certain point given the drone's initial conditions. Then, in the search algorithm, for each time step ahead of the ball, the time-to-target (TT) for the drone is calculated (see Fig.3). The target position is simply computed from the time ahead. The reference position is then the position that satisfies the equation t0 = TT; hence, the reference position becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the Turtle, so only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
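The search for the time ahead satisfying t0 = TT can be sketched as below. The straight-line, constant-speed drone model is a deliberately crude stand-in for the identified closed-loop model; names and step sizes are illustrative.

```python
import math

def time_to_target(drone_pos, drone_speed, target):
    """Stand-in drone model: straight-line flight at constant speed.
    The real TT would come from the identified closed-loop dynamics."""
    return math.hypot(target[0] - drone_pos[0],
                      target[1] - drone_pos[1]) / drone_speed

def reference_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    dt=0.05, t_max=5.0):
    """Find the smallest look-ahead t0 with TT(ball(t + t0)) <= t0 and
    return the corresponding reference position [x(t+t0), y(t+t0)]."""
    t0 = 0.0
    target = ball_pos
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(drone_pos, drone_speed, target) <= t0:
            break
        t0 += dt
    return target

# Stationary ball 1 m away, drone speed 1 m/s: the search settles on
# the ball position itself once t0 reaches the 1 s flight time.
print(reference_ahead((0.0, 0.0), 1.0, (1.0, 0.0), (0.0, 0.0)))
```

With a moving ball, the returned reference lies ahead of the current ball position, producing the straighter blue trajectory of Fig.2.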
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning calculated from the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. This command, a velocity perpendicular to the velocity vector of each drone, is sent to the LLC and is stopped once the drones are in safe positions again. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be a possible area of interest for those who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
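In Python terms, the storage role described above could look like the sketch below. The method names only loosely mirror the tables; the actual Matlab class differs, and Python does not enforce the access restriction, it merely signals it by convention.

```python
class WorldModel:
    """Minimal storage sketch: state is mutated only via 'set' methods,
    mirroring the role of the set functions in Table 1."""

    def __init__(self, n_players):
        self._ball = (0.0, 0.0)
        self._drone = (0.0, 0.0, 0.0, 0.0)     # x, y, psi, z
        self._players = [(0.0, 0.0)] * n_players

    def set_ball(self, x, y):
        self._ball = (x, y)

    def set_drone(self, x, y, psi, z):
        self._drone = (x, y, psi, z)

    def set_player(self, i, x, y):
        self._players[i] = (x, y)

    def get_ball(self):
        return self._ball

    def get_player(self, i):
        return self._players[i]

W = WorldModel(2)          # two players per team, as in W = WorldModel(n)
W.set_ball(1.2, -0.4)
W.set_player(0, 3.0, 3.0)
print(W.get_ball(), W.get_player(0))
```

Keeping the players in their own list (here) or class (in the project) is what lets the number of players vary while the ball and drone stay fixed properties.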
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given in the table below. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
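The exact update rule is given in the equation image above. One plausible reading of it, with the α parameters acting as blending weights (a larger velocity weight trusts the old velocity more, a larger position weight trusts the raw measurement more), is sketched below. This is an assumption for illustration only, not the project's verified equation.

```python
def particle_update(x_old, v_old, z_old, z_new, dt,
                    alpha_v=0.8, alpha_x=0.2):
    """One illustrative strong-filter step for a single axis.
    NOTE: the blend structure is assumed, not taken from the project."""
    v_meas = (z_new - z_old) / dt                 # velocity implied by data
    v_new = alpha_v * v_old + (1 - alpha_v) * v_meas
    x_pred = x_old + v_new * dt                   # model prediction
    x_new = (1 - alpha_x) * x_pred + alpha_x * z_new
    return x_new, v_new

# Ball moving at roughly 1 m/s sampled every 0.1 s: the estimate
# follows the measurements while smoothing out the jitter.
x, v = 0.0, 1.0
z_prev = 0.0
for z in [0.11, 0.19, 0.32, 0.41]:
    x, v = particle_update(x, v, z_prev, z, 0.1)
    z_prev = z
print(round(x, 3), round(v, 3))
```

The same scalar update would be applied per axis and per particle; the hypothesis-switching logic described above sits on top of it.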
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
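The ‘Match’ step described above can be sketched as a greedy nearest-neighbour search that falls back to the next-nearest free player when two measurements claim the same player (names are illustrative):

```python
def match(measurements, players):
    """Map each measurement index to a unique player index by distance."""
    taken = set()
    assignment = {}
    for mi, (mx, my) in enumerate(measurements):
        # player indices sorted by squared distance to this measurement
        order = sorted(range(len(players)),
                       key=lambda j: (players[j][0] - mx) ** 2
                                   + (players[j][1] - my) ** 2)
        for j in order:                  # nearest free player wins
            if j not in taken:
                taken.add(j)
                assignment[mi] = j
                break
    return assignment

players = [(0.0, 0.0), (5.0, 5.0)]
print(match([(0.1, 0.1), (4.9, 5.1)], players))  # -> {0: 0, 1: 1}
print(match([(0.1, 0.0), (0.2, 0.0)], players))  # conflict resolved greedily
```

As noted in the text, this greedy resolution is not globally optimal: the second of two conflicting measurements simply takes the next-nearest player, which works well with two players and a high update rate but can degrade with many players entering and leaving the field of view.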
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera at the top of the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera at the top of the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control of the drone can be made robust. Since there are no strict requirements on the flying height of the drone, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visual impression of the original data measured by the top camera; it clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces a reasonable estimate for the empty data points. <br />
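The gap-filling step amounts to linear interpolation over the timestamps of the valid samples. A sketch (assuming, for simplicity, that the first and last samples are present):

```python
def fill_gaps(t, x):
    """Replace None entries in x by linear interpolation over t."""
    known = [(ti, xi) for ti, xi in zip(t, x) if xi is not None]
    out = []
    for ti, xi in zip(t, x):
        if xi is not None:
            out.append(xi)
            continue
        # nearest valid samples on either side of the gap
        t0, x0 = max((p for p in known if p[0] < ti), key=lambda p: p[0])
        t1, x1 = min((p for p in known if p[0] > ti), key=lambda p: p[0])
        w = (ti - t0) / (t1 - t0)
        out.append(x0 + w * (x1 - x0))
    return out

# One in four camera samples dropped, as in the data above:
print(fill_gaps([0.0, 0.1, 0.2, 0.3], [0.0, None, 0.4, 0.6]))
```

MATLAB's `interp1` performs the same operation on the recorded camera data in one call.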
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are given in the global coordinate system. <br><br><br />
Data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model identified is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid making the Kalman filter parameter-varying. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
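The frame transformation around the filter can be sketched as below (illustrative Python; the project implements this in MATLAB/Simulink, so the function names are only for illustration):

```python
import math

def global_to_body(vx_g, vy_g, psi):
    """Rotate a planar vector from the global frame into the body frame.

    psi is the drone yaw angle in radians, measured from the global x-axis.
    """
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g, -s * vx_g + c * vy_g)

def body_to_global(vx_b, vy_b, psi):
    """Inverse rotation: body frame back to the global frame."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_b - s * vy_b, s * vx_b + c * vy_b)
```

In this scheme the Kalman filter would run on the body-frame data returned by `global_to_body`, and the filtered state would be mapped back with `body_to_global` before being used as feedback.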
====Model identification from input to position====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below; this processed data is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above figures display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from these data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
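To give an intuition for what such a second-order identified model looks like in simulation, here is a minimal discrete-time sketch (plain Python with Euler integration; the actual identified parameters live in the state-space matrices above, so the values ω = 5 rad/s and ζ = 0.7 below are made-up illustration values, not the project's):

```python
def simulate_second_order(u, dt=0.01, n=2000, omega=5.0, zeta=0.7):
    """Simulate x'' = -2*zeta*omega*x' - omega^2*x + omega^2*u
    with semi-implicit Euler integration and return the final position.

    The state vector is X = [velocity, position], matching the text.
    """
    v, x = 0.0, 0.0
    for _ in range(n):
        a = -2.0 * zeta * omega * v - omega ** 2 * x + omega ** 2 * u
        v += a * dt
        x += v * dt
    return x
```

For a unit step input the position settles at the DC gain of the model (1.0 here), which is the kind of step response used for the validation plots above.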
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR.Drone was measured and modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the further Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue that has been investigated. The data selected for identification was measured under fixed conditions: full battery, fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector X = [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. For this purpose, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the current state of the drone together with the camera field of view and resolution (which are defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, reducing their false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill uses the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' must be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixels is fed into the ball detection skill.<br />
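The geometry behind this estimate can be sketched as follows (a Python sketch under a pinhole-camera assumption; the project computes this in MATLAB, and the function name is only illustrative):

```python
import math

def expected_ball_radius_px(height_m, ball_radius_m, fov_deg, image_px):
    """Estimate the ball radius in pixels for a downward-facing camera.

    height_m:      camera height above the field
    ball_radius_m: real ball radius
    fov_deg:       field-of-view angle along the chosen image axis
    image_px:      image size in pixels along that same axis
    """
    # Ground distance covered along this axis (pinhole model).
    ground_m = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    px_per_m = image_px / ground_m
    return ball_radius_m * px_per_m
```

For example, a 0.11 m ball seen from 2 m through a 45° vertical FOV mapped onto 360 pixels comes out at roughly 24 pixels, which is the kind of radius hint passed to the detection skill.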
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is defined here. The estimated object radius in pixels is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It continuously calculates the relative position of the outer lines corresponding to the state of the drone; this position information is encoded using Hough transform criteria. The line estimator is needed for enabling and disabling the detection of the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, otherwise it should be disabled. This enable flag is also encoded in the output matrix, because an always-running Line Detection Skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is given [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. In addition, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or a ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant properties of the drone as given on the manufacturer’s website are listed below in Table 1; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high-quality HD video streams to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, it was decided to use the drone's own structure, control electronics and software for positioning the drone; moreover, low-level control of a drone is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is mounted at the front of the drone, but for refereeing it should look downward. Therefore, the first idea was to disassemble it and mount the camera on a swivel to tilt it down 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after some trial and error it turned out that capturing and transferring the images of the embedded drone camera to MATLAB is not straightforward: it is either incompatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and an alternative camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone brings some difficulties. Since the source code of the drone is not open, it is very hard to access data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with the built-in software. An indirect route is therefore required, which costs processing time. The best capture rate obtained with the current algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used for processing to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, the achieved measurements showed that the horizontal FOV is close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
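The mismatch between the 92° diagonal specification and the measured horizontal angle can be checked with simple trigonometry. The sketch below (Python; it assumes a rectilinear lens, which is our own simplification) converts a diagonal FOV into horizontal and vertical FOVs for a given aspect ratio:

```python
import math

def fov_from_diagonal(diag_fov_deg, aspect_w, aspect_h):
    """Split a diagonal FOV into horizontal/vertical FOVs (rectilinear lens)."""
    t = math.tan(math.radians(diag_fov_deg) / 2.0)  # half-diagonal tangent
    diag = math.hypot(aspect_w, aspect_h)
    h = 2.0 * math.degrees(math.atan(t * aspect_w / diag))
    v = 2.0 * math.degrees(math.atan(t * aspect_h / diag))
    return h, v
```

For a 92° diagonal at 16:9 this gives roughly 84° horizontal, so the measured ~70° suggests the manufacturer's specification is optimistic or defined differently.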
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, that camera is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is described in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
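For reference, commands such as FTRIM are plain AT command strings with an increasing sequence number, sent over the control UDP port, as described in the AR.Drone SDK. A minimal Python sketch of building such strings (the full command set is defined by the SDK; this builder is only illustrative):

```python
class ATCommandBuilder:
    """Build AR.Drone AT command strings with an increasing sequence number."""

    def __init__(self):
        self.seq = 0

    def _cmd(self, name, *args):
        self.seq += 1
        payload = ",".join(str(a) for a in (self.seq,) + args)
        return "AT*{}={}\r".format(name, payload)

    def ftrim(self):
        # Set the horizontal-plane reference (drone must be on flat ground).
        return self._cmd("FTRIM")

    def config(self, key, value):
        # Generic configuration command, e.g. to enable the navdata demo mode.
        return self._cmd("CONFIG", '"{}"'.format(key), '"{}"'.format(value))
```

Each string would then be sent as the payload of a UDP packet to the control port listed in the initialization above.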
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be regarded as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
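One non-obvious detail inside such a wrapper is that the AR.Drone protocol transmits the floating-point command values as the signed integers that share their 32-bit IEEE-754 bit pattern (as documented in the SDK). A Python sketch of this conversion:

```python
import struct

def float_arg(value):
    """Encode a float command as its AT*PCMD integer argument: the signed
    32-bit integer with the same bit pattern as the IEEE-754
    single-precision representation of the float."""
    return struct.unpack("<i", struct.pack("<f", value))[0]
```

For example, a tilt command of -0.8 is transmitted as -1085485875, the example value used in the SDK documentation.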
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
The ''gigecam'' interface of MATLAB® is used to access this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of it. A snapshot of the field including the agent is taken with a short exposure time. By searching this image for the pixels illuminated by the LEDs on the drone, the coordinates on the x and y axes are obtained. The yaw (ψ) orientation of the drone is also obtained from the relative positions of these pixels.<br />
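The pose extraction from the LED pixels can be sketched as below. The assumed LED layout (two LEDs close together at the rear and one at the front) is our own illustration and may differ from the actual arrangement on the drone:

```python
import math
from itertools import combinations

def drone_pose_from_leds(points):
    """Estimate (x, y, yaw) from three LED pixel coordinates.

    Assumed layout: the two closest LEDs form the rear pair and the
    remaining LED marks the front; yaw points from the rear midpoint
    towards the front LED.
    """
    i, j = min(combinations(range(3), 2),
               key=lambda p: math.dist(points[p[0]], points[p[1]]))
    front = [points[k] for k in range(3) if k not in (i, j)][0]
    mid = ((points[i][0] + points[j][0]) / 2.0,
           (points[i][1] + points[j][1]) / 2.0)
    yaw = math.atan2(front[1] - mid[1], front[0] - mid[0])
    cx = sum(p[0] for p in points) / 3.0
    cy = sum(p[1] for p in points) / 3.0
    return cx, cy, yaw
```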
<br />
The top camera can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. the image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of this search, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is placed at the front of the drone, facing down. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV was shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to know the real-world size of the image frame and the corresponding real-world dimension per pixel, and it is embedded in the Simulink code that converts measured positions to world coordinates.<br />
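The conversion from a detected pixel to world coordinates can be sketched as follows (Python; it assumes a downward-facing camera aligned with the drone body axes, which is our simplification of the Simulink implementation, and the default horizontal FOV of 49.6° is only our estimate derived from the 60° diagonal at 4:3):

```python
import math

def pixel_to_world(u, v, drone_x, drone_y, psi, height,
                   fov_h_deg=49.6, width=640, height_px=480):
    """Map an image pixel (u, v) to global field coordinates."""
    m_per_px = 2.0 * height * math.tan(math.radians(fov_h_deg) / 2.0) / width
    # Offset from the image center, in meters, in the drone body frame.
    bx = (u - width / 2.0) * m_per_px
    by = (v - height_px / 2.0) * m_per_px
    # Rotate into the global frame and add the drone position.
    c, s = math.cos(psi), math.sin(psi)
    return (drone_x + c * bx - s * by, drone_y + s * bx + c * by)
```

A detection at the image center maps back to the drone's own (x, y) position, which is a convenient sanity check for the calibration.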
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed as a football-playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project the Turtle is used as a referee. The software developed at TechUnited did not need any further extension, as part of the existing extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Based on the game situation and the agent positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, the interest is in the planar motion of the drone in (x, y), as the ball and the objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ), measured from the top-camera images, are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero; this results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than this value, the output is determined from the error and its derivative with PD coefficients (Fig. 4). Since there is no position-dependent force in the equations of motion of the drone, an I action is not necessary. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width; this prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control commands must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
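The described behavior of the high-level controller can be summarized in a small sketch (Python; the dead-zone width and PD gains below are placeholders, not the tuned project values):

```python
def hlc_output(error, d_error, dead_zone=0.05, kp=1.0, kd=0.2):
    """High-level controller for one axis: zero inside the dead zone,
    plain PD (without offsetting the error by the dead-zone width)
    outside it, as described in the text. No I action is used."""
    if abs(error) < dead_zone:
        return 0.0
    return kp * error + kd * d_error
```

Because the error is not offset by the dead-zone width, the output jumps from zero to a finite value at the dead-zone boundary, which keeps the commanded speed out of the LLC's small-command oscillation region.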
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of the rotations about the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z direction is kept constant and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
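A minimal Python sketch of sending such a command string to the robot over UDP (the string format, host address and port below are hypothetical; the actual protocol is defined by the scripts in the GitHub repository):

```python
import socket

def format_cmd(vx, vy, omega):
    """Pack three velocity setpoints into a command string.
    (Hypothetical format; see the repository for the real protocol.)"""
    return "{:.3f},{:.3f},{:.3f}".format(vx, vy, omega)

def send_cmd(cmd, host="192.168.1.50", port=5000):
    """Fire-and-forget UDP send of a command string to the Raspberry Pi."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(cmd.encode("ascii"), (host, port))
    finally:
        sock.close()
```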
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many different subsystems and components, each with its own tasks and functions. All tasks and skills are implemented using built-in Matlab® functions and libraries. To achieve the project aims, communication between all components is required, with proper queuing and ordering of the tasks: some of them must run simultaneously, some are consecutive and others are independent. Additionally, the implementation should be compatible with the layered system architecture. To handle simultaneous communication and the layered structure, Simulink is used for programming; the resulting Simulink diagram is given in the following figure.<br />
<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
== Properties of the Simulink File ==<br />
* In the Simulink file, blocks are categorized using the ''Area'' utility, according to the functions/tasks they perform.<br />
* The categorization is shown via different colors, and these divisions are consistent with the system architecture.<br />
* The interconnections between the functional blocks are made via ''GoTo'' and ''From'' blocks, and parameter names are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation. <br />
* The functions and blocks under the ''Visualizations'' area are not part of the main tasks of the project but are necessary to see the results; this part is therefore not part of the ''System Architecture''.<br />
* Since most of the functions and built-in commands in the algorithms are not directly available to Simulink, each Matlab function is called from Simulink using the ''extrinsic'' command.<br />
* All blocks and functions are well commented; the details of the algorithms can be examined in the source code.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from these images and a mapping of the game state can be computed, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. Details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, whereas the TechUnited software is written in C and uses Ubuntu as the operating system. The TechUnited player robots communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked: as stated earlier, the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment; the data is then sent to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
<hr />
Implementation MSD16 2017-10-22T23:13:27Z <p>Tolcer: /* Properties of the Simulink File */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are; to detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as is, without alterations. Preferably we would use this software to also process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use Matlab's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space; for detecting the field, the lines, the objects and (yellow or orange) balls, it is more convenient to first transform the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
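The color-space transformation can be illustrated with the full-range BT.601 conversion below (Python sketch; note that MATLAB's `rgb2ycbcr` uses the studio-range BT.601 variant, so the exact constants differ slightly):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr for 8-bit channel values."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr
```

An orange or red ball pixel ends up with high Cr and low Cb, i.e. in the corner of the CbCr plane that the ball detection filters on.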
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and reused. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; these colors lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. A blob recognition algorithm then returns the blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list of blobs and properties it is determined whether a blob could be a ball: blobs that are too big or too small are removed from the list. For each remaining candidate ball, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
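The confidence formula above can be sketched as follows. The project code is in MATLAB; this is an illustrative Python version, where `r_blob` and `r_ball` denote the measured blob radius and the expected ball radius in pixels:<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a blob is the ball, in [0, 1].

    Combines roundness (minor/major axis ratio) with how well the
    measured blob radius matches the expected ball radius.
    """
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```

A perfectly round blob of exactly the expected radius gives a confidence of 1; elongated or wrongly sized blobs are penalized multiplicatively.<br />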
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of color filtering in the CbCr plane, it is done on the Y-axis only: since the players are covered with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of detected blobs that could be players is larger for the object detection than for the ball detection, because the players are not perfectly round like the ball. If a player is seen from the top, it appears different than when it is seen from an angle. A wider range of accepted blobs lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. A detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) && (minor_axis >= 2 * minimal_object_radius) && (major_axis >= 4 * minimal_object_radius)<br />
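This blob-based check can be sketched as follows (the project uses MATLAB; this is an illustrative Python version, with axis lengths and `minimal_object_radius` in pixels):<br />

```python
def possible_collision(major_axis, minor_axis, minimal_object_radius):
    """Flag a blob as a possible two-player collision when it is
    clearly elongated and larger than a single player's blob."""
    elongated = (major_axis / minor_axis) > 1.5
    wide_enough = minor_axis >= 2 * minimal_object_radius
    long_enough = major_axis >= 4 * minimal_object_radius
    return elongated and wide_enough and long_enough
```

The intuition: two players standing against each other merge into one elongated blob roughly twice as long as a single player, which is what the three thresholds capture.<br />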
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in various ways. In this project, the planar position (x, y, ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone's angular position is stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore these two coordinates are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, whose output data is accessible; the obtained altitude data is fused with the planar position data. The information obtained from the different position measurements is composed into the vector given below, which is used as ‘droneState’.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates must be transformed into the field reference coordinate axes. The coordinate system of the image in MATLAB is given below. Note that here the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Next, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further under the following assumptions:<br />
* The focal center of the camera coincides with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of image) with respect to the origin of the drone is known. It is a drone fixed position vector and lies along x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone); the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now, the calculated pixel coordinates of the detected object must be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera, so the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
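The pixel-to-millimeter conversion described above can be sketched as follows. The project implements this in MATLAB; this illustrative Python version assumes a simple pinhole model, with `fov_rad` the full field of view in radians along the image width and the camera pointing straight down:<br />

```python
import math

def mm_per_pixel(height_mm, fov_rad, image_width_px):
    """Ground width covered by one pixel, for a camera looking straight
    down from height_mm (camera assumed parallel to the ground)."""
    ground_width_mm = 2 * height_mm * math.tan(fov_rad / 2)
    return ground_width_mm / image_width_px

def pixel_to_field(px, py, image_center, height_mm, fov_rad, image_width_px):
    """Convert pixel coordinates into millimeters in the
    camera-centered frame, relative to the image center."""
    scale = mm_per_pixel(height_mm, fov_rad, image_width_px)
    return ((px - image_center[0]) * scale,
            (py - image_center[1]) * scale)
```

The result still has to be rotated by the drone yaw ψ and translated by the camera position to land in the field reference frame, as the text explains.<br />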
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents, sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as tasks. The path planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about any object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object, to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is determining the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move in only one direction, the same strategy can be applied: the reference value should be determined only in the moving direction of the turtle, so only the X component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
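The search for the optimal time ahead t0 can be sketched as follows (illustrative Python; the project uses MATLAB). Here `time_to_target` stands in for the drone motion model mentioned above and is an assumption, as is the constant-velocity ball prediction:<br />

```python
def find_reference(ball_pos, ball_vel, drone_pos, time_to_target,
                   t_max=5.0, dt=0.1):
    """Search over look-ahead times for the first point where the
    drone's time-to-target TT satisfies t0 = TT (i.e. the drone
    arrives as the ball does). Falls back to the farthest prediction."""
    t0 = dt
    while t0 <= t_max:
        # Predicted ball position t0 seconds ahead (constant velocity).
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        tt = time_to_target(drone_pos, target)
        if tt <= t0:  # drone can reach this point in time
            return target
        t0 += dt
    return (ball_pos[0] + ball_vel[0] * t_max,
            ball_pos[1] + ball_vel[1] * t_max)
```

With a drone twice as fast as the ball, a ball at (1, 0) moving along x is intercepted around (2, 0), i.e. one second ahead, matching the t0 = TT condition.<br />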
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above a field, the path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning based on the objectives of the drones (see Fig.4). The collision avoidance block is triggered when the drones' states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is done by sending a relatively strong command to the drones, perpendicular to the velocity vector of each drone, in a direction that maintains a safe distance. It is sent to the LLC as a velocity command and is stopped once the drones are in safe positions. Since this project deals with only one drone, collision avoidance was not implemented; it could be an area of interest for those who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
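The repelling command described above was not implemented in this project, but a minimal sketch of how it might look is given below (illustrative Python, an assumption rather than the project's method): pick the perpendicular to each drone's velocity that points away from the other drone.<br />

```python
import math

def avoidance_command(pos_a, pos_b, vel_a, speed=1.0):
    """Velocity command for drone A: perpendicular to A's own velocity,
    directed away from drone B, with magnitude `speed`."""
    # Unit vector perpendicular to A's velocity (left-hand normal).
    norm = math.hypot(vel_a[0], vel_a[1]) or 1.0
    perp = (-vel_a[1] / norm, vel_a[0] / norm)
    # Flip it if it points toward B instead of away from B.
    away = (pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])
    if perp[0] * away[0] + perp[1] * away[1] < 0:
        perp = (-perp[0], -perp[1])
    return (speed * perp[0], speed * perp[1])
```

The command stays perpendicular to the current velocity, as required in the text, so it deflects the drone rather than braking it.<br />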
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given in the table below. Increasing α_v makes the filter 'stronger', increasing α_x makes the filter 'weaker' (i.e. it trusts the measurements more) and increasing α_z makes the filter 'stronger' with respect to the direction, but increases the average error of the prediction (i.e. the prediction may run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are both used by the same particle filter, since it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in the case of a normally distributed uncertainty. This variance determines how much a measurement is trusted, and distinguishes between accurate and inaccurate sensors. In the current implementation this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
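The re-initialization rule described above (two consecutive measurements more than 0.5 m from the estimate reset the strong filter) can be sketched in Python. The 0.5 m threshold and the "single outlier is a false positive" rule come from the text; the surrounding class structure is illustrative:<br />

```python
import math

class BallHypothesis:
    """Tracks outlier measurements and resets the strong-filter
    estimate after two consecutive far-off measurements."""

    def __init__(self, position, threshold=0.5):
        self.position = position      # strong-filter estimate (x, y)
        self.threshold = threshold    # meters
        self.outliers = 0

    def update(self, measurement):
        dist = math.hypot(measurement[0] - self.position[0],
                          measurement[1] - self.position[1])
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:
                # Change of direction: re-initialize on the last measurement.
                self.position = measurement
                self.outliers = 0
        else:
            # A single outlier is treated as a false positive detection.
            self.outliers = 0
        return self.position
```

In the full filter the returned position would additionally be smoothed by the strong filter; this sketch only shows the reset logic.<br />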
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
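The nearest-neighbor matching performed by the 'Match' function can be sketched as follows (illustrative Python; the project implements this in MATLAB). As described in the text, when two measurements are nearest to the same player, the second measurement falls back to its next-nearest player:<br />

```python
import math

def match(measurements, player_positions):
    """Greedily match each measurement to the nearest unclaimed player.
    Returns one player index per measurement."""
    assigned = []
    for m in measurements:
        # Player indices sorted by distance to this measurement.
        order = sorted(range(len(player_positions)),
                       key=lambda i: math.hypot(
                           m[0] - player_positions[i][0],
                           m[1] - player_positions[i][1]))
        # First nearest neighbor not already claimed; as the text notes,
        # this greedy fallback is not globally optimal.
        pick = next(i for i in order if i not in assigned)
        assigned.append(pick)
    return assigned
```

With two well-separated players and a high update rate this greedy scheme suffices; a globally optimal assignment (e.g. the Hungarian algorithm) would be the natural upgrade for many players.<br />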
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and the vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera on top of the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera on top of the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control of the drone can be made robust. Since there are no strict requirements on the flying height of the drone, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relations between inputs and outputs are analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig. 2) provides a visualization of the original data measured by the top camera. Based on fig. 2, the motion data clearly indicates what the motion of the drone looks like in one degree of freedom. To make it continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation estimates reasonable guess for empty data points. <br />
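The interpolation of the empty camera samples can be sketched as follows (illustrative Python; in the project's MATLAB code a call such as `interp1` would do the same job):<br />

```python
def interpolate_gaps(t, values):
    """Linearly interpolate None entries in a measurement series.

    t: sample times; values: measurements, with None for the ~25% of
    frames where the top camera did not detect the drone LEDs.
    """
    known = [(ti, v) for ti, v in zip(t, values) if v is not None]
    out = []
    for ti, v in zip(t, values):
        if v is not None:
            out.append(v)
            continue
        # Nearest known samples before and after this gap.
        before = max((k for k in known if k[0] < ti), default=known[0])
        after = min((k for k in known if k[0] > ti), default=known[-1])
        if after[0] == before[0]:
            out.append(before[1])
        else:
            w = (ti - before[0]) / (after[0] - before[0])
            out.append(before[1] + w * (after[1] - before[1]))
    return out
```

Linear interpolation gives a reasonable guess for short gaps, which is exactly what the processed-data figure above shows.<br />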
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems exist: one in the body frame, the other the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also given in the body-frame coordinate system. The positions measured by the top camera are calculated in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via the rotation matrix. To simplify the identification process, the rotation matrix is built outside the Kalman filter. The identified model is the response to the input commands (a, b, c and d) in the body frame. The filtered data is then transformed back to the global frame as feedback. The basic concept is filtering data in the body frame to avoid making the Kalman filter parameter-varying. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
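The body-to-global transformation via the rotation matrix can be sketched as follows (illustrative Python; ψ is the drone yaw obtained from the top camera):<br />

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame using the
    planar rotation matrix R(psi)."""
    vx = math.cos(psi) * vx_body - math.sin(psi) * vy_body
    vy = math.sin(psi) * vx_body + math.cos(psi) * vy_body
    return vx, vy

def global_to_body(vx, vy, psi):
    """Inverse transformation: rotate by -psi."""
    return body_to_global(vx, vy, -psi)
```

Because R(ψ) depends only on the measured yaw, applying it outside the filter keeps the Kalman filter itself time-invariant, which is the point of the block diagram above.<br />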
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, a dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above shows the input and corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence some assumptions are needed for MATLAB to make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone exhibits four samples of delay due to the wireless communication. Compared with results measured several times, the estimate is reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; nonlinear system behavior may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated. The data selected for identification was measured in a situation where the battery is full, the orientation is fixed and the drone starts from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The defined estimator blocks are the ball size, object size and line estimators. Using the recent state of the drone together with the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms to reduce false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The ball detection skill is achieved using the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in the image, in ''pixels'', should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height information is obtained from the drone position data; the others are defined in the initialization function. The estimated ball radius in pixel units is fed into the ball detection skill.<br />
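The estimated ball radius in pixels can be sketched with the same pinhole assumptions as in the positioning section (illustrative Python; `fov_rad` is the full field of view in radians along the image width, an assumption about the camera model):<br />

```python
import math

def ball_radius_px(ball_radius_mm, height_mm, fov_rad, image_width_px):
    """Expected ball radius in pixels, given the drone height, the
    camera field of view and the real ball radius."""
    # Ground width covered by the image at this height.
    ground_width_mm = 2 * height_mm * math.tan(fov_rad / 2)
    px_per_mm = image_width_px / ground_width_mm
    return ball_radius_mm * px_per_mm
```

The returned value would then bound the radius range handed to the circle detection, shrinking its search space and its false positive rate.<br />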
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is defined here. The estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The line estimator block gives the expected outer lines of the field. It always calculates the relative position of the outer lines corresponding to the state of the drone; this position information is encoded using the Hough transform criteria. The line estimator is required for enabling and disabling the line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the line detection skill should be enabled, otherwise it should be disabled, since an always-running line detection skill would produce many false positive line detections. This enabling information is also encoded in the output matrix. The expected positions of the outer lines are not only used for enabling and disabling the line detection skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is given [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or a ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone as given on the manufacturer's website are listed below in Table 1. Note that only the relevant properties are covered; internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled from a mobile phone through free software (for both Android and iOS) and streams HD video to the phone. It has a front camera whose capabilities are given in Table 1, as well as its own on-board computer, controller, accelerometers, altimeter and driver electronics. Being a consumer product, its design, body and controller are robust. It was therefore decided to rely on the drone's own structure, control electronics and software for positioning the drone; moreover, designing a drone controller from scratch is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. The first idea was therefore to disassemble the camera and mount it on a swivel so it could be tilted down by 90 degrees, at the cost of some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after considerable trial and error it turned out that capturing and transferring images from the embedded drone camera to MATLAB is not straightforward: the camera is either incompatible with MATLAB or introduces a lot of delay. The idea of a swiveled drone camera was therefore abandoned and an alternative camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone brings some difficulties. Since the source code of the drone is closed, it is hard to access some of the on-board data, including the camera images. Image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with the built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used for processing to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is its field of view (FOV) angle; its definition is shown in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a horizontal FOV of about 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2, where the corresponding distance per pixel is calculated at standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
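Under a simple pinhole model, the horizontal and vertical FOV components follow from the diagonal FOV and the aspect ratio. The Python sketch below (illustrative, not project code) yields about 84° horizontal for a 92° diagonal at 16:9, so the measured ~70° may point at sensor cropping or lens effects rather than a calculation error:<br />

```python
import math

def fov_components(diag_fov_deg, aspect_w, aspect_h):
    """Split a diagonal FOV into horizontal and vertical components
    for an ideal pinhole camera with the given aspect ratio."""
    d = math.sqrt(aspect_w ** 2 + aspect_h ** 2)
    t = math.tan(math.radians(diag_fov_deg) / 2.0)
    h = 2.0 * math.degrees(math.atan(t * aspect_w / d))
    v = 2.0 * math.degrees(math.atan(t * aspect_h / d))
    return h, v

h, v = fov_components(92.0, 16, 9)  # AR.Drone front camera spec value
```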
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To combine easy communication with satisfactory image quality, a Wi-Fi camera with a TCP/IP interface was selected. This camera, the Ai-Ball, is described in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be regarded as a block which expects a UDP packet containing a string as input and returns an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the forward (x) and left (y) direction respectively, the third is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
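On the command side, the Parrot SDK documents that AT-command float arguments are transmitted as the signed 32-bit integer reinterpretation of their IEEE-754 bit pattern. The Python sketch below illustrates that encoding together with the clamping to [-1, 1]; the exact AT*PCMD field order and flag values should be verified against the SDK rather than taken from this sketch:<br />

```python
import struct

def float_to_int_bits(x):
    """Reinterpret a 32-bit float's IEEE-754 bit pattern as a signed integer,
    as the AR.Drone AT-command protocol expects."""
    return struct.unpack('<i', struct.pack('<f', x))[0]

def pcmd_string(seq, roll, pitch, gaz, yaw):
    """Build an AT*PCMD progressive-command string (sketch only; check the
    field order and the flag value against the Parrot SDK)."""
    vals = [max(-1.0, min(1.0, float(v))) for v in (roll, pitch, gaz, yaw)]
    bits = ','.join(str(float_to_int_bits(v)) for v in vals)
    return 'AT*PCMD={},1,{}\r'.format(seq, bits)
```

For example, a roll command of -0.5 is sent as the integer -1090519040, the bit pattern of the float -0.5.<br />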
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
MATLAB®'s ''GigeCam'' toolbox is used to access this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of it. A snapshot of the field, including the drone, is taken with a short exposure time. This image is then searched for the pixels illuminated by the LEDs, which yields the x and y coordinates of the drone. The yaw (ψ) orientation of the drone follows from the relative positions of these pixels.<br />
<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. the image processing) may be slower. This is not a problem: the positioning of the drone is neither perfect nor critical, and as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
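A minimal sketch of the pose extraction from the LED pixels is given below. It assumes, hypothetically, one LED at the nose and two at the rear; the actual LED arrangement on the drone and the blob search itself are not part of this sketch:<br />

```python
import math

def drone_pose_from_leds(front, rear_left, rear_right):
    """Estimate the planar pose (x, y, yaw) from three LED pixel coordinates.

    Hypothetical layout: one LED at the nose, two at the rear. Position is the
    centroid of the LEDs; yaw is the direction from the rear midpoint to the
    nose LED.
    """
    cx = (front[0] + rear_left[0] + rear_right[0]) / 3.0
    cy = (front[1] + rear_left[1] + rear_right[1]) / 3.0
    rmx = (rear_left[0] + rear_right[0]) / 2.0
    rmy = (rear_left[1] + rear_right[1]) / 2.0
    yaw = math.atan2(front[1] - rmy, front[0] - rmx)
    return cx, cy, yaw
```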
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After this search, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is mounted facing down at the front of the drone. To reduce the added weight, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); its definition was shown above. The Ai-Ball has a 480p resolution with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to determine the real-world size of the image frame and the real-world distance corresponding to one pixel. It is embedded in the Simulink code where measured positions are converted to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. Details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project, the Turtle is used as a referee. The software developed at TechUnited did not need any extension, since part of the extensive code base could be reused to fulfill the referee role. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agent positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block and is not covered here. The goal of the motion control block is to track the desired drone states (xd, yd, θd), i.e. the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as references for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera mounted on the ceiling and used as feedback in the control system.<br />
The drone height z is maintained at a constant level to provide suitable images for the image processing block. Only the planar motion of the drone in (x, y) is of interest, since the ball and the objects on the pitch move in 2-D. Consequently, the desired trajectories of the drone are simple ones such as straight lines, and aggressive acrobatic maneuvers are not required. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption of this approach is that the quadrotor performs only small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to their reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented on the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent to the drone as fly commands.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig. 3). The input-output diagram of the controller for each state consists of three regions. In the dead-zone region, if the error in a direction is smaller than a predefined value, the controller output is zero; this results in a comfort zone in which the drone stays motionless, corresponding to the dead zone of the controller. If the error is larger than this value, the output is determined by the error and its derivative through PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset from it; this prevents small commands in the oscillation region from being sent to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system; hence, the control command must first be transformed into the drone coordinate system using a rotation matrix based on Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
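Per axis, the dead-zone PD law described above can be sketched as follows (Python; the gains and dead-zone width are illustrative, not the tuned project values):<br />

```python
def dead_zone_pd(error, d_error, kp, kd, dead_zone):
    """PD control action with a dead zone: inside the zone the command is zero,
    outside it the full (non-offset) PD law is applied, matching the scheme
    described in the text."""
    if abs(error) < dead_zone:
        return 0.0
    return kp * error + kd * d_error
```

Note that the PD output is deliberately not offset at the dead-zone boundary, so small commands in the oscillation region are never sent to the drone.<br />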
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles about sequentially displaced axes of the reference frame; these angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is kept constant and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
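Under this yaw-only assumption, transforming a command from the global frame into the drone body frame reduces to a planar rotation, sketched below (Python):<br />

```python
import math

def global_to_drone(vx_g, vy_g, yaw):
    """Rotate a velocity command from the global frame into the drone body
    frame using the yaw-only rotation matrix (roll and pitch assumed ~0)."""
    c, s = math.cos(yaw), math.sin(yaw)
    vx_b = c * vx_g + s * vy_g
    vy_b = -s * vx_g + c * vy_g
    return vx_b, vy_b
```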
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many subsystems and components, each with its own tasks and functions. All tasks and skills are implemented using built-in Matlab® functions and libraries. To achieve the project aims, communication between all the components is required, together with queuing and ordering of the tasks: some of them must run simultaneously, some are consecutive and others are independent. In addition, the implementation should be compatible with the layered system architecture. To handle the simultaneous communication and the layered structure, Simulink is used for programming, resulting in the Simulink diagram shown in the following figure.<br />
<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
== Properties of the Simulink File ==<br />
* In the Simulink file, blocks are grouped with the ''Area'' utility according to their functions/tasks.<br />
* The grouping is shown with different colors, and these divisions are consistent with the system architecture.<br />
* The interconnections between the functional blocks are made via ''GoTo'' and ''From'' blocks, and parameter names are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation. <br />
* The functions and blocks under the ''Visualizations'' area are not part of the main tasks of the project, but are necessary to inspect the results; this part is therefore not part of the ''System Architecture''.<br />
* Since most of the functions and built-in commands in the algorithms are not directly available in Simulink, each Matlab function is called from Simulink using the ''extrinsic'' command.<br />
* All blocks and functions are well commented; the details of the algorithms can be examined in the source code.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms useful information from the game can be extracted and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed in the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored in the memory of the Turtle and regularly updated in a real-time database (RTDb) called the WorldMap; details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally, so the Turtle used for the referee system has a locally stored global map of the environment. This information had to be extracted from the Turtle and fused with the algorithms and software developed for the drone. The latter were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The TechUnited player-robots communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. As stated earlier, this data consists of the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment; the information is then sent to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
Tolcer
https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45717
Implementation MSD16
2017-10-22T23:12:31Z<p>Tolcer: /* Integration */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (B.O.O.P.)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball and the outer field lines are; to detect a collision, the system needs to be able to detect players. Since we decided to use camera-equipped agents, detecting balls, lines and players requires image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as-is without alterations. Preferably we would also use this software to process the images from the drone; however, understanding years' worth of code in order to adapt it to the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we therefore use Matlab's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space; for detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
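For reference, the ITU-R BT.601 conversion that MATLAB's ''rgb2ycbcr'' implements can be sketched for a single 8-bit pixel as follows (Python):<br />

```python
def rgb_to_ycbcr(r, g, b):
    """Convert an 8-bit RGB pixel to YCbCr per ITU-R BT.601
    (the definition used by MATLAB's rgb2ycbcr)."""
    y = 16 + (65.481 * r + 128.553 * g + 24.966 * b) / 255
    cb = 128 + (-37.797 * r - 74.203 * g + 112.0 * b) / 255
    cr = 128 + (112.0 * r - 93.786 * g - 18.214 * b) / 255
    return y, cb, cr
```

Yellow and orange pixels end up with low Cb and high Cr values, which is what makes the corner thresholding in the CbCr plane effective.<br />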
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and reused. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; these colors lie in the upper-left corner of the CbCr plane. A binary image is created in which pixels falling into this corner get a value of 1 and the rest get a value of 0. Next, for noise filtering, a dilation operation is performed on the binary image with a circular element of 10 pixels radius, and remaining holes inside the obtained blobs are filled. A blob recognition algorithm then returns the blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs, it is determined whether each one could be a ball: blobs that are too big or too small are removed. For the remaining candidates, a confidence is calculated based on blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
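The confidence formula above translates directly into code (sketch; ''Rblob'' is the blob's equivalent radius and ''Rball'' the expected ball radius from the size estimator):<br />

```python
def ball_confidence(minor_axis, major_axis, blob_radius, expected_radius):
    """Roundness-and-size confidence for a candidate ball blob, following
    confidence = (minor/major) * (min(Rblob, Rball) / max(Rblob, Rball))."""
    roundness = minor_axis / major_axis
    size = min(blob_radius, expected_radius) / max(blob_radius, expected_radius)
    return roundness * size
```

A perfectly round blob of exactly the expected size scores 1.0; elongated or wrongly-sized blobs score lower on both factors.<br />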
<br />
=== Detect objects ===<br />
The object (or player) detection works similarly to the ball detection. However, instead of color filtering in the CbCr plane, the filtering is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of blob sizes accepted as players is larger than for the ball detection. This is because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A larger acceptance range reduces the chance of missed detections (false negatives). The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
After the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused; a detailed explanation is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides information about the ball condition, it cannot handle all use cases. Therefore, an improvement was added to also handle the cases where the ball position is predicted by the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is known (at least predicted), and based on this coordinate the in/out decision can be refined. This extension was added to the ball-out-of-pitch refereeing skill function. However, it still sometimes yields false positives and false negatives, so further improvement of the refereeing remains necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take an image of the playing field and see no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since the collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection uses the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and to the minimal expected player radius. If the following condition holds, a possible collision is detected:<br />
<br />
if (((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
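This condition can be wrapped into a small predicate (Python sketch using the thresholds from the text):<br />

```python
def possible_collision(major_axis, minor_axis, r_min):
    """Flag a blob as a possible collision: markedly elongated, yet at least
    as thick as one player and as long as two, per the condition above."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * r_min
            and major_axis >= 4 * r_min)
```

Intuitively, a single round player fails the elongation test, while two touching players merge into one blob that is roughly twice as long as it is wide.<br />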
<br />
== Positioning skills ==<br />
The position data of each component can be obtained in various ways. In this project, the planar position (x, y, ψ) of the refereeing agent (the drone) is obtained from an ultra-bright LED strip detected by the top camera. The ball position is obtained through image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image processing algorithms are developed under one essential assumption: the drone attitude is stabilized such that roll (φ) and pitch (θ) are zero. These two angles are therefore not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference frame, i.e. the x, y and yaw (ψ) information. However, for the refereeing tasks and the image processing, the drone altitude must also be known. The drone has its own altimeter, whose output data is accessible; the obtained altitude is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as ‘droneState’.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
The ball and object detection skills return the detected object coordinates in pixels. To define the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference coordinate frame. The coordinate system of the image in Matlab is given below; note that the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of each detected object or ball are calculated with respect to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the image center) with respect to the origin of the drone is known: it is a drone-fixed position vector that lies along the x-axis of the drone. Taking the assumptions above into account and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate frame can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate frame should then be aligned as shown in the figure. <br />
<br />
Finally, the calculated pixel coordinates of the detected object must be converted into real-world units (from pixels to millimeters). This conversion factor changes with the height of the camera, so the height information of the drone is used: from the drone height and the FOV information of the camera, the pixel-to-millimeter ratio is calculated. More detailed information about the FOV is given in the next sections.<br />
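As an illustration of this pixel-to-field conversion, the sketch below combines the drone pose, the camera offset and the height-dependent pixel-to-meter ratio. It is written in Python for readability (the project itself uses MATLAB/Simulink), and the function name, parameter names and sign conventions are illustrative assumptions, not the project code:<br />

```python
import math

def pixel_to_field(px, py, img_w, img_h, fov_diag_deg,
                   drone_x, drone_y, drone_z, drone_yaw, cam_offset_x):
    """Map a detected pixel (px, py) to field coordinates (meters)."""
    # Pixel-to-meter ratio from the drone height and the diagonal FOV
    diag_px = math.hypot(img_w, img_h)
    ground_diag = 2.0 * drone_z * math.tan(math.radians(fov_diag_deg) / 2.0)
    m_per_px = ground_diag / diag_px
    # Offsets from the image center, expressed in the drone body frame
    # (the axis/sign conventions here are an assumption for illustration)
    bx = cam_offset_x + (px - img_w / 2.0) * m_per_px
    by = (img_h / 2.0 - py) * m_per_px
    # Rotate by the drone yaw and translate by the drone position
    c, s = math.cos(drone_yaw), math.sin(drone_yaw)
    return (drone_x + c * bx - s * by,
            drone_y + s * bx + c * by)
```

For example, an object detected exactly at the image center is located the camera offset ahead of the drone, rotated into the field frame.<br />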
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents, which is sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill has to be performed by each agent. For instance, this block assigns 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest position and velocity of the target object, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the World Model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path-planning block. The first concerns the case of multiple drones, in order to avoid collisions between them; the second is the generation of an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is also possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the drone is far from the ball, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as the reference input would result in a curved trajectory (red line). However, if the estimated ball position some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach yields better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to set as the desired reference. To solve it, we need a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply computed from the time ahead. The reference position is then the position that satisfies t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only along the moving direction of the turtle, so only the x-component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
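The search for the time ahead t0 can be sketched as follows. This Python sketch deliberately substitutes a crude constant-speed model for the real drone-plus-controller model; the function name, step size and speed are illustrative assumptions:<br />

```python
import math

def find_time_ahead(drone_pos, ball_pos, ball_vel, v_max, dt=0.05, t_max=5.0):
    """Search for the smallest time ahead t0 with TT(t0) <= t0."""
    steps = int(t_max / dt)
    for i in range(steps + 1):
        t0 = i * dt
        # Predicted ball position t0 seconds ahead (constant-velocity model)
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        # Time to target (TT) from a crude constant-speed drone model;
        # the real model would include the controller dynamics
        tt = math.hypot(target[0] - drone_pos[0],
                        target[1] - drone_pos[1]) / v_max
        if tt <= t0:
            return t0, target
    # Fallback: track the furthest prediction
    return t_max, (ball_pos[0] + ball_vel[0] * t_max,
                   ball_pos[1] + ball_vel[1] * t_max)
```

With the drone at the origin and a ball moving away, the search returns the first prediction the drone can actually reach in time, which is then used as the controller reference.<br />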
<br />
=== Collision avoidance ===<br />
When multiple drones fly above the field, path planning should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning computed from the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are at safe positions. Since this project deals with only one drone, collision avoidance was not implemented; it could, however, be an interesting area for others to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
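A minimal sketch of such a repelling command, assuming a simple distance threshold and a fixed escape speed (both values, and the function names, are illustrative assumptions, since the block was not implemented in the project):<br />

```python
import math

SAFE_DIST = 1.0      # meters; trigger threshold (assumption)
ESCAPE_SPEED = 1.0   # m/s; magnitude of the repelling command (assumption)

def imminent(own_pos, other_pos):
    """Crude trigger: the drones are closer than the safe distance."""
    return math.hypot(own_pos[0] - other_pos[0],
                      own_pos[1] - other_pos[1]) < SAFE_DIST

def escape_command(own_pos, own_vel, other_pos):
    """Velocity command perpendicular to the own velocity, away from the other drone."""
    speed = math.hypot(*own_vel)
    if speed == 0.0:
        return (0.0, 0.0)
    # The two unit vectors perpendicular to the own velocity
    p1 = (-own_vel[1] / speed, own_vel[0] / speed)
    p2 = (own_vel[1] / speed, -own_vel[0] / speed)
    # Pick the one pointing away from the other drone
    away = (own_pos[0] - other_pos[0], own_pos[1] - other_pos[1])
    chosen = p1 if p1[0] * away[0] + p1[1] * away[1] > 0 else p2
    return (chosen[0] * ESCAPE_SPEED, chosen[1] * ESCAPE_SPEED)
```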
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter offers some advantages. A particle filter, also known as Monte Carlo Localization, was chosen. The main reason is that a particle filter can handle multiple-object tracking, which proves useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as explained below. Ideally, this filter should perform three tasks:<br>
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related, in the sense that if measurement noise is filtered out, the prediction becomes more accurate. Together, these two relations imply that tasks 1) and 2) are conflicting as well, so a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
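The reset logic described above can be sketched as follows. The sketch is in Python (the project code is MATLAB), and the simple low-pass update and the parameter names are illustrative stand-ins for the actual particle filter; only the 0.5 m threshold and the two-consecutive-outliers rule come from the text:<br />

```python
import math

OUTLIER_DIST = 0.5   # meters, from the text above
RESET_COUNT = 2      # consecutive outliers before resetting

class StrongBallFilter:
    """'Strong' ball estimate with the consecutive-outlier reset described above."""
    def __init__(self, pos, alpha=0.2):
        self.pos, self.vel, self.alpha = pos, (0.0, 0.0), alpha
        self.outliers = []

    def update(self, z, dt):
        if math.hypot(z[0] - self.pos[0], z[1] - self.pos[1]) > OUTLIER_DIST:
            self.outliers.append(z)
            if len(self.outliers) >= RESET_COUNT:
                z0, z1 = self.outliers[-2:]
                # Re-initialize on the latest measurement; the new velocity
                # corresponds to the detected change in direction
                self.vel = ((z1[0] - z0[0]) / dt, (z1[1] - z0[1]) / dt)
                self.pos, self.outliers = z1, []
        else:
            self.outliers = []
            # 'Strong' update: mostly trust the prediction, lean slightly to z
            self.pos = (self.pos[0] + self.vel[0] * dt + self.alpha * (z[0] - self.pos[0]),
                        self.pos[1] + self.vel[1] * dt + self.alpha * (z[1] - self.pos[1]))
        return self.pos
```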
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. To track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
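This greedy nearest-neighbor matching, including the fall-back to the second nearest neighbor on a duplicate, can be sketched as follows (Python sketch with illustrative names; the actual ‘Match’ function is implemented in MATLAB):<br />

```python
import math

def match(measurements, last_positions):
    """Greedily match measured positions to the closest known player positions."""
    assigned = []
    for z in measurements:
        # Player indices ordered by distance to this measurement
        order = sorted(range(len(last_positions)),
                       key=lambda i: math.hypot(z[0] - last_positions[i][0],
                                                z[1] - last_positions[i][1]))
        # Take the nearest player that has not been claimed yet
        pick = next(i for i in order if i not in assigned)
        assigned.append(pick)
    return assigned
```

As noted above, this is not globally optimal: when two measurements compete for the same player, the second one simply gets the next nearest candidate.<br />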
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and the vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera at the top of the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera at the top of the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control system for the drone is robust. As the flying height of the drone is not critical for the system, the height of the drone is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in Figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are applied as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (Fig.2) gives a visual impression of the original data measured from the top camera. Based on Fig.2, the motion data clearly indicate what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data show that the interpolation provides a reasonable estimate for the empty data points. <br />
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: one is the coordinate system in the body frame, the other is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter. The identified model describes the response to the input commands (a, b, c and d) in the body frame; the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid building a parameter-varying Kalman filter. Figure 5 describes this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
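The transformation used here is a plain yaw rotation between the body frame and the global frame, sketched below (Python, for illustration only; the project applies the equivalent matrix in MATLAB):<br />

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame by the yaw angle psi."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)

def global_to_body(vx_g, vy_g, psi):
    """Inverse rotation: global frame back into the body frame."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g,
            -s * vx_g + c * vy_g)
```

Filtering happens in the body frame; the filtered result is mapped back to the global frame with `body_to_global` before it is used as feedback.<br />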
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data, shown below, is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from these data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there are 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may cause the mismatching part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated. The data selected for identification were measured with a full battery, a fixed orientation, and with the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y-direction is described as a state-space model with state vector X = [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'', the expected diameter of the circles is important. To that end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks that are defined are the ball size, object size and line estimators. Using the recent state of the drone, together with the field of view and resolution of the camera (which are defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, which reduce the false positives, errors and processing time of the algorithms. <br />
<br />
===Ball Size Estimator===<br />
The ball detection skill is achieved using the ''imfindcircles'' built-in command of the Image Processing Toolbox of MATLAB®. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the available height of the agent carrying the camera, the field of view of the camera, and the real size of the ball. The height is obtained from the drone position data; the other parameters are defined in the initialization function. The estimated ball radius in pixel units is fed into the ball detection skill.<br />
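The radius estimate boils down to the pixel-to-meter ratio at the current height, as sketched below (Python sketch; the function name and the example values in the test are illustrative assumptions, not the project's calibration):<br />

```python
import math

def expected_ball_radius_px(height_m, fov_h_deg, img_w_px, ball_radius_m):
    """Expected ball radius in pixels, from camera height and horizontal FOV."""
    # Real-world width covered by the image at this height
    ground_w = 2.0 * height_m * math.tan(math.radians(fov_h_deg) / 2.0)
    m_per_px = ground_w / img_w_px
    return ball_radius_m / m_per_px
```

The resulting value can be passed as the expected radius range to the circle detection, e.g. searching slightly above and below the estimate to allow for measurement error.<br />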
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected sizes of the objects in pixels are estimated using the drone height and FOV. Instead of the ball radius, the real sizes of the objects are defined here. The estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The line estimator block gives the expected outer lines of the field. This estimator always calculates the relative position of the outer lines corresponding to the current state of the drone; this position information is encoded using the Hough transform criteria. The line estimator is required for enabling and disabling line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the line detection skill should be enabled, otherwise it should be disabled. This decision is also encoded in the output matrix, because an always-running line detection skill would produce many false positive line outputs. The expected positions of the outer lines are not only used for enabling and disabling the line detection skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''refereeing task''.<br />
<br />
More detailed information and the algorithm behind this estimator are explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or a ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS), and it sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project the drone’s own structure, control electronics and software are used for positioning the drone. Apart from that, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; for refereeing, however, it should look downwards. Therefore, the first idea was to disassemble it and mount it on a swivel to tilt it down 90 degrees, which would require some structural changes. Since all the implementation is achieved in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after considerable trial and error it was observed that capturing and transferring the images of the embedded drone camera is neither easy nor straightforward in MATLAB. Further effort showed that using this drone camera to capture images is either incompatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used for processing to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, the achieved measurements showed that the effective FOV is close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
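For reference, the conversion from a diagonal FOV to the horizontal and vertical FOV angles of an ideal rectilinear lens can be sketched as follows (Python sketch; this idealized pinhole model ignores lens distortion):<br />

```python
import math

def hv_fov(diag_fov_deg, aspect_w=16, aspect_h=9):
    """Horizontal and vertical FOV angles from a diagonal FOV and aspect ratio."""
    d = math.hypot(aspect_w, aspect_h)
    # Half-diagonal extent on the (normalized) image plane
    half_diag = math.tan(math.radians(diag_fov_deg) / 2.0)
    h = 2.0 * math.degrees(math.atan(half_diag * aspect_w / d))
    v = 2.0 * math.degrees(math.atan(half_diag * aspect_h / d))
    return h, v
```

For a 92° diagonal FOV at 16:9 this gives roughly 84° horizontal and 54° vertical, so a measured effective FOV near 70° indicates the usable view is noticeably narrower than the specification.<br />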
<br />
Although these measurements were obtained with the drone camera, that camera is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have easy communication and satisfactory image quality, a Wi-Fi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
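According to the AR.Drone SDK referenced above, floating-point arguments of AT commands are transmitted as the signed 32-bit integer that shares the float's IEEE-754 bit pattern. A sketch of how the wrapper's four input values could be encoded into a progressive-command string (Python sketch; the helper names are hypothetical, the AT*PCMD syntax is from the SDK):<br />

```python
import struct

def float_to_at_int(x):
    """Reinterpret a 32-bit float's bit pattern as a signed 32-bit integer."""
    return struct.unpack('<i', struct.pack('<f', float(x)))[0]

def pcmd(seq, roll, pitch, gaz, yaw, flag=1):
    """Build an AT*PCMD progressive command (wrapper inputs in [-1, 1]).
    flag=1 enables progressive commands; seq is the running sequence number."""
    args = [float_to_at_int(v) for v in (roll, pitch, gaz, yaw)]
    return "AT*PCMD={},{},{},{},{},{}\r".format(seq, flag, *args)
```

For example, a hover command encodes all four floats as 0, while -0.8 is sent as its bit-pattern integer -1085485875 (the example value given in the SDK documentation).<br />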
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
The ‘gigecam’ interface of MATLAB® is used to access this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of it. A snapshot of the field, including the agent, is taken with a short exposure time. By processing this image to find the pixels illuminated by the LEDs on the drone, the coordinates on the x and y axes are obtained. The yaw (ψ) orientation of the drone is also obtained, from the relative positions of these pixels.<br />
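A sketch of how a pose could be recovered from the three detected LED positions (Python sketch; the assumed LED layout, two closely spaced rear LEDs and one front LED, is hypothetical, and the result would still need the pixel-to-meter conversion of the top camera):<br />

```python
import math
from itertools import combinations

def drone_pose_from_leds(leds):
    """Estimate (x, y, yaw) from three LED positions.
    Assumes two closely spaced rear LEDs and one front LED (hypothetical layout)."""
    # The closest pair of LEDs is taken as the rear pair
    i, j = min(combinations(range(3), 2),
               key=lambda p: math.dist(leds[p[0]], leds[p[1]]))
    k = 3 - i - j  # index of the remaining (front) LED
    rear_mid = ((leds[i][0] + leds[j][0]) / 2.0,
                (leds[i][1] + leds[j][1]) / 2.0)
    front = leds[k]
    # Yaw from the rear midpoint towards the front LED
    yaw = math.atan2(front[1] - rear_mid[1], front[0] - rear_mid[0])
    # Drone position: midpoint between the rear pair and the front LED
    pos = ((rear_mid[0] + front[0]) / 2.0, (rear_mid[1] + front[1]) / 2.0)
    return pos[0], pos[1], yaw
```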
<br />
The topcam can stream images at a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
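This LED search can be sketched as follows (in Python for illustration; the actual implementation processes GigeCam frames in MATLAB). The bright-pixel centroid gives (x, y), and the principal axis of the LED pixel cloud gives an orientation estimate; note that recovering the full yaw angle (rather than orientation modulo 180°) relies on the asymmetric LED layout, which this sketch does not model.<br />

```python
import math

def locate_drone(gray, thresh=200):
    """Estimate (x, y, psi) of the drone in pixel coordinates from a
    short-exposure top-camera frame in which only the 3 LEDs are bright.
    gray is a 2-D list of intensities. Orientation comes from the
    principal axis of the LED pixel cloud (defined modulo 180 degrees)."""
    pts = [(c, r) for r, row in enumerate(gray)
                  for c, v in enumerate(row) if v > thresh]
    if not pts:
        return None                      # drone not visible
    n = len(pts)
    cx = sum(p[0] for p in pts) / n      # centroid = drone position
    cy = sum(p[1] for p in pts) / n
    # second moments of the pixel cloud give the principal axis
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in pts) / n
    sxx_syy = sum((p[0] - cx) ** 2 - (p[1] - cy) ** 2 for p in pts) / n
    psi = 0.5 * math.atan2(2 * sxy, sxx_syy)
    return cx, cy, psi
```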
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After some searching, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, one needs a Wi-Fi antenna. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the battery of the camera is removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); values for the Ai-Ball are tabulated below. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to compute the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
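The entries of such a table follow from simple pinhole geometry. A sketch (in Python for illustration), assuming a distortion-free camera looking straight down:<br />

```python
import math

def mm_per_pixel(height_mm, diag_fov_deg=60.0, res=(640, 480)):
    """Ground-plane size of one pixel for a camera looking straight down,
    derived from the diagonal FOV (pinhole model, distortion ignored)."""
    diag_px = math.hypot(*res)                          # 800 px at 640x480
    diag_mm = 2.0 * height_mm * math.tan(math.radians(diag_fov_deg / 2.0))
    return diag_mm / diag_px
```

At a flying height of 1 m this gives roughly 1.44 mm per pixel, and the ratio scales linearly with height, which is why the drone altitude enters the pixel-to-world conversion.<br />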
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and take images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple, e.g. straight lines, while aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x,y,θ) measured from the top-camera images are compared to the reference values. The high level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, i.e. the low level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in a direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller (Fig.5). If the error is larger than this value, the output is determined by the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region have not been offset by the dead-zone width. This approach prevents small commands in the oscillation region from being sent to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that, these errors are calculated with respect to the global coordinate system. Hence, the control command first must be transformed in to drone coordinate system with rotational matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
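Per axis, the dead-zone PD logic described above can be sketched as follows (in Python for illustration; the actual controller runs in Simulink, and the gains and dead-zone width are placeholders to be tuned):<br />

```python
def dead_zone_pd(err, derr, kp, kd, dead_zone):
    """PD action with a dead zone: inside the comfort zone the command
    is zero; outside it, a PD command on the raw (non-offset) error is
    sent, matching the behaviour described above."""
    if abs(err) < dead_zone:
        return 0.0                     # comfort zone: no motion command
    return kp * err + kd * derr        # PD action on the raw error
```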
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is important. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
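With roll and pitch assumed zero, the transformation is a planar rotation by the yaw angle. A sketch (in Python for illustration; the sign convention for the body y-axis is an assumption):<br />

```python
import math

def global_to_drone(vx_g, vy_g, psi):
    """Rotate a velocity command from the field frame into the drone body
    frame. With roll and pitch assumed zero, the full RPY rotation matrix
    reduces to a planar rotation by the yaw angle psi."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g,       # body x (forward)
            -s * vx_g + c * vy_g)      # body y (left)
```

For example, a drone yawed 90° sees a global x-velocity command as a command along its negative body y-axis.<br />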
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To its left, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
As explained in the System Architecture part, the project consists of many different subsystems and components. Each component has its own tasks and functions, all implemented using built-in Matlab® functions and libraries. To achieve the project aims, communication between all the components is required, including queuing and ordering of tasks. Some tasks and functions have to run simultaneously, some are consecutive and others are independent. Additionally, the implementation should be compatible with the system architecture which, as can be seen there, is layered. To handle simultaneous communication and the layered structure, Simulink is used for programming. The resulting Simulink diagram is given in the following figure.<br />
<br />
<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
== Properties of the Simulink File ==<br />
* In the Simulink file, blocks are categorized using the ''Area'' utility, according to their functions/tasks.<br />
* The categorization is shown via different colors and these divisions are consistent with the system architecture.<br />
* The interconnections between the functioning blocks are achieved via ''GoTo'' and ''From'' blocks, and parameter names are shown explicitly.<br />
* Each individual block and its function in this Simulink diagram is explained in this documentation.<br />
* The functions and blocks under the ''Visualizations'' area are not part of the main tasks of the project but are necessary to see the results. Therefore this part is not part of the ''System Architecture''.<br />
* Since almost none of the functions and built-in commands in the algorithms are directly available in Simulink, each Matlab function is called from Simulink using ''extrinsic'' declarations.<br />
* Each block and function is well commented. The details of the algorithms can be examined via the source code.<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field, can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone, which were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player-robots from TechUnited communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. As stated earlier, this is the information on the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code from the TechUnited code-base was taken out. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle; the S-function [[sf_test_rMS_wMM.c]] listens to this information in MATLAB’s environment and sends it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''.<br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
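The pack-send/receive-unpack pattern of this UDP link can be illustrated as follows (in Python for brevity; the packet layout of four little-endian doubles is purely illustrative and not the actual TechUnited/RTDb format):<br />

```python
import struct

def pack_worldmodel(turtle_xy, ball_xy):
    """Serialize a minimal world-model sample (turtle and ball position)
    as four little-endian doubles. Layout is illustrative only."""
    return struct.pack('<4d', *turtle_xy, *ball_xy)

def parse_worldmodel(payload):
    """Inverse of pack_worldmodel, as run on the receiving side."""
    tx, ty, bx, by = struct.unpack('<4d', payload)
    return (tx, ty), (bx, by)

# Sending/receiving then reduces to sock.sendto(payload, (host, port))
# and payload, _ = sock.recvfrom(1024) on a datagram socket.
```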
<br />
=References=<br />
<references/></div>
https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45715 Implementation MSD16 (2017-10-22T22:52:58Z)<p>Tolcer: /* Drone */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of the Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we do not alter it and use it as is. Preferably we would also use this software to process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
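MATLAB's rgb2ycbcr uses the ITU-R BT.601 conversion for 8-bit images; as a per-pixel sketch (in Python for illustration):<br />

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 conversion for 8-bit RGB, matching MATLAB's rgb2ycbcr:
    Y ends up in [16, 235], Cb and Cr in [16, 240]."""
    y  =  16 + ( 65.481 * r + 128.553 * g +  24.966 * b) / 255
    cb = 128 + (-37.797 * r -  74.203 * g + 112.0   * b) / 255
    cr = 128 + (112.0   * r -  93.786 * g -  18.214 * b) / 255
    return y, cb, cr
```

Chroma (Cb, Cr) is then independent of brightness to first order, which is what makes color thresholding for the ball robust, while the luma channel Y is what the player detection thresholds on.<br />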
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used can be red, orange or yellow; colors that are in the upper-left corner of the CbCr-plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image a blob recognition algorithm returns blobs with their properties, such as the blob center and major- and minor axis length. From this list with blobs and their properties, it is determined if it could be a ball. Blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minorAxis / majorAxis) * (min(Rblob, Rball) / max(Rblob, Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
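The confidence formula translates directly into code (Python sketch; Rblob and Rball denote the detected blob radius and the expected ball radius):<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence from the text: blob roundness (minor/major axis ratio)
    times agreement of the blob radius with the expected ball radius.
    Both factors lie in (0, 1], so a perfectly round, correctly sized
    blob scores 1.0."""
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```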
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than the surroundings. Moreover, the range of accepted blob sizes is larger for the object detection than for the ball detection, because the players are not perfectly round like the ball: a player seen from the top appears different than when seen from an angle. A bigger range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-the-pitch detection algorithm is called. As in the line detection case, essentially the algorithm developed by the previous generation is used. The detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added for the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
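As a small self-contained check (Python sketch of the same condition): a blob is flagged when it is markedly elongated and large enough to contain two touching players.<br />

```python
def possible_collision(minor_axis, major_axis, r_min):
    """Flags a blob as a possible collision: clearly elongated
    (axis ratio > 1.5) and larger than a single player of minimal
    radius r_min, per the condition above."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * r_min
            and major_axis >= 4 * r_min)
```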
<br />
== Positioning skills ==<br />
The position data of each component can be obtained in diverse ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data.<br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch, yaw (φ,θ,ψ).<br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under an essential assumption: the drone's angular positions are well stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles have not been taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter whose output data is accessible; the obtained altitude is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as ‘droneState’.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels.<br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to be focal center of the camera and this is coincident with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (which is its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of image) with respect to the origin of the drone is known. It is a drone-fixed position vector and lies along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained.<br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Finally, the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). The conversion ratio changes with the height of the camera, so the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by which agent. For instance, this block sends 'detect ball' as a task to agent A (drone) and 'locate player' to agent B. The path-planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects like the ball, whether it has been updated by an agent camera or not. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path-planning block. The first is avoiding collisions between drones in the case of multiple drones. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can be taken into account in a more efficient way.<br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). If instead the estimated position of the ball some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve it, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial conditions of the drone. In the search algorithm, for each time step ahead of the ball, the time-to-target (TT) for the drone is calculated (see Fig.3); the target position is simply extrapolated from the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should then be determined only in the moving direction of the Turtle, i.e. only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
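A minimal sketch of this search (in Python for illustration), assuming as a placeholder for the identified closed-loop drone model that the drone flies straight at constant speed:<br />

```python
import math

def reference_ahead(ball_pos, ball_vel, drone_pos, drone_speed,
                    dt=0.05, t_max=5.0):
    """Search the smallest look-ahead t0 satisfying TT(target) <= t0,
    where the target is the ball position extrapolated t0 seconds ahead.
    The drone model (straight-line flight at constant drone_speed) is a
    stand-in for the identified model mentioned in the text."""
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,   # ball extrapolated
                  ball_pos[1] + ball_vel[1] * t0)
        tt = math.hypot(target[0] - drone_pos[0],
                        target[1] - drone_pos[1]) / drone_speed
        if tt <= t0:
            return target          # t0 ~ TT: reachable meeting point
        t0 += dt
    return ball_pos                # horizon exceeded: fall back to ball
```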
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning based on the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone, sent to the LLC and stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be an area of interest for others who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
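The storage discipline above can be illustrated with a short sketch. The project class is written in MATLAB; this hypothetical Python rendering only shows the idea of globally readable state that is changed exclusively through dedicated set functions (all names and default values here are illustrative):

```python
class Player:
    """Last known position of a single player."""
    def __init__(self):
        self.pos = (0.0, 0.0)

class WorldModel:
    """Central storage: readable anywhere, written only via set_* functions."""
    def __init__(self, n_players_per_team):
        self._ball = (0.0, 0.0)
        self._drone = (0.0, 0.0, 0.0)   # x, y, yaw
        self._turtle = (0.0, 0.0)
        # players are a class of their own, since their number can vary
        self._players = [Player() for _ in range(2 * n_players_per_team)]

    def set_ball(self, x, y):
        self._ball = (x, y)

    def set_drone(self, x, y, yaw):
        self._drone = (x, y, yaw)

    def set_player(self, idx, x, y):
        self._players[idx].pos = (x, y)

    # read access
    @property
    def ball(self):
        return self._ball

    def player(self, idx):
        return self._players[idx].pos

W = WorldModel(2)          # two players per team
W.set_ball(1.5, -0.3)
```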
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. Together, these two relations imply that tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
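The strong/weak switching logic can be sketched as follows. This is a simplified stand-in, not the project's particle filter: an exponential-smoothing update replaces the particle machinery, and the gains (alpha_v, alpha_x) are illustrative; only the two-consecutive-outlier rule and the 0.5 m threshold come from the description above.

```python
import math

class BallFilter:
    """Sketch of the two-hypothesis ball filter: a 'strong' smoothed
    estimate that is re-initialised when two consecutive measurements
    disagree with it (interpreted as a real change in direction)."""
    def __init__(self, alpha_v=0.8, alpha_x=0.3, threshold=0.5):
        self.alpha_v, self.alpha_x, self.threshold = alpha_v, alpha_x, threshold
        self.x = None            # strong estimate (x, y)
        self.v = (0.0, 0.0)      # estimated velocity
        self.z_prev = None       # previous raw measurement (weak hypothesis)
        self.outliers = 0

    def update(self, z, dt):
        if self.x is None:
            self.x, self.z_prev = z, z
            return self.x
        # predicted position from the previous state
        pred = (self.x[0] + self.v[0] * dt, self.x[1] + self.v[1] * dt)
        if math.dist(pred, z) > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:      # change of direction, not noise
                self.v = ((z[0] - self.z_prev[0]) / dt,
                          (z[1] - self.z_prev[1]) / dt)
                self.x = z              # weak hypothesis becomes the new start
                self.outliers = 0
            else:
                self.x = pred           # ignore a single outlier
        else:
            self.outliers = 0
            v_meas = ((z[0] - self.z_prev[0]) / dt,
                      (z[1] - self.z_prev[1]) / dt)
            self.v = tuple(self.alpha_v * vo + (1 - self.alpha_v) * vm
                           for vo, vm in zip(self.v, v_meas))
            self.x = tuple((1 - self.alpha_x) * p + self.alpha_x * zi
                           for p, zi in zip(pred, z))
        self.z_prev = z
        return self.x
```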
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. To track the players even when they are outside the current field of view, and to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it must handle the case in which the sensor(s) detect multiple players. The system therefore needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
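The matching step might look roughly like this Python sketch (the project's ‘Match’ function is MATLAB; the greedy fallback to the next-nearest free player mirrors the behavior described above, but exact tie-breaking may differ):

```python
import math

def match(measurements, last_positions):
    """Greedy nearest-neighbour matching sketch. Returns, for each measured
    position, the index of the matched player. If the nearest player is
    already taken by an earlier measurement, this measurement falls back
    to its next-nearest free player."""
    assigned = set()
    result = []
    for z in measurements:
        # player indices sorted by distance to this measurement
        order = sorted(range(len(last_positions)),
                       key=lambda i: math.dist(z, last_positions[i]))
        for i in order:
            if i not in assigned:
                assigned.add(i)
                result.append(i)
                break
    return result
```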
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for pitch angle, roll angle, yaw angle and the vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. In addition, three LEDs on the drone can be detected by the camera above the field. From the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and reduce the measurement noise, making the subsequent closed-loop control of the drone more robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the drone position information is incomplete. Fig. 2 gives a visual impression of the original data measured by the top camera; it clearly shows what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces a reasonable guess for the empty data points. <br />
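The gap-filling step can be sketched as follows (a Python/NumPy stand-in for the MATLAB preprocessing; empty camera samples are represented as NaN here):

```python
import numpy as np

def fill_gaps(t, x):
    """Linearly interpolate over missing (NaN) samples in a position trace
    measured at times t. Roughly 25% of the top-camera samples are empty."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    valid = ~np.isnan(x)
    return np.interp(t, t[valid], x[valid])

t = [0.0, 0.1, 0.2, 0.3, 0.4]
x = [0.0, np.nan, 0.4, np.nan, 0.8]
filled = fill_gaps(t, x)   # gaps replaced by interpolated values
```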
====Coordinate system introduction====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame via the control signals (a, b, c, d), and the measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global frame. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is transformed back to the global frame to serve as feedback. Filtering in the body frame avoids the need for a parameter-varying Kalman filter. Figure 5 shows this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
====Model identification from input to position====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, a dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response of input b is measured by the top camera. The preprocessed data is shown in following. And this processed data will be used in model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figure above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to help MATLAB make a reasonable estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model is evaluated in MATLAB by comparing it with the real response. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone is modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatch of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector X = [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
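Given such an identified state-space model, the filter itself follows the standard discrete-time Kalman recursion. The sketch below is a hypothetical Python rendering for a single body-frame axis with state X = [velocity, position]; the matrices A, B, C and the noise covariances are illustrative placeholders, not the identified drone model, and skipping the update step when the camera misses the LEDs mirrors the motivation above.

```python
import numpy as np

dt = 0.05
A = np.array([[0.9, 0.0],
              [dt,  1.0]])       # velocity decays, position integrates it
B = np.array([[0.5], [0.0]])     # tilt command accelerates the drone
C = np.array([[0.0, 1.0]])       # top camera measures position only
Q = np.eye(2) * 1e-3             # process noise covariance
R = np.array([[1e-2]])           # measurement noise covariance

def kf_step(x, P, u, z=None):
    """One predict(+update) step. Pass z=None when the camera did not
    detect the drone LEDs in this frame (prediction only)."""
    # predict
    x = A @ x + B * u
    P = A @ P @ A.T + Q
    # update only when a measurement is available
    if z is not None:
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x = x + K @ (z - C @ x)
        P = (np.eye(2) - K @ C) @ P
    return x, P
```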
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. For this purpose, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the current state of the drone and information about the field of view and resolution of the camera (both defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms that reduce false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill uses the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixels is fed into the ball detection skill.<br />
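The geometry behind this estimate can be sketched as follows (Python; the function name, the 60° FOV and the 11 cm ball radius in the example are illustrative assumptions, not project values):

```python
import math

def ball_radius_px(height_m, fov_h_deg, image_width_px, ball_radius_m):
    """Expected ball radius in pixels for a downward-looking camera.
    The ground strip covered by the image is 2*h*tan(FOV/2) wide, which
    gives a pixels-per-meter scale to convert the real ball radius."""
    ground_width_m = 2.0 * height_m * math.tan(math.radians(fov_h_deg) / 2.0)
    px_per_m = image_width_px / ground_width_m
    return ball_radius_m * px_per_m

# e.g. drone at 2 m, 60° horizontal FOV, 640 px wide image, 11 cm ball
r = ball_radius_px(2.0, 60.0, 640, 0.11)
```

In the project, the resulting radius (or a small range around it) would be passed to the circle detection as its search radius.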
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV; instead of the ball radius, the real size of the objects is defined here. The estimated object radius in pixels is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It continuously calculates the position of the outer lines relative to the current state of the drone; this position information is encoded using the Hough transform parametrization. The line estimator is used to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, otherwise it should be disabled. This flag is also encoded in the output matrix, because an always-running Line Detection Skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is given [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. One column is added to the output matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing task. The built-in properties of the drone given on the manufacturer’s website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high-quality HD video streams to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller, accelerometers, altimeter, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Moreover, low-level control of a drone is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera faces forward, but for refereeing it should look down. The first idea was therefore to disassemble it and mount the camera on a swivel to tilt it down 90 degrees, which would require some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring images from the drone's embedded camera to MATLAB is not straightforward. Further effort showed that using this camera is either not compatible with MATLAB or causes a lot of delay. The idea of a swiveled drone camera was therefore abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. Image processing is done in MATLAB, but taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used for processing to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed that the effective FOV is close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
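In Python, the same initialization could be sketched as follows. The project uses MATLAB UDP objects; here the listed values (host 192.168.1.1, control port 5556, navdata port 5554, 1 ms timeout) are reused, the AT command framing follows the AR.Drone SDK, and the actual send is left commented out since it requires the drone's network to be present.

```python
import socket

DRONE_IP = "192.168.1.1"      # remote host from the list above
CONTROL_PORT = 5556
NAVDATA_PORT = 5554

def at_command(name, seq, *args):
    """Build an AT command string, e.g. 'AT*FTRIM=1\r' (SDK framing)."""
    fields = ",".join(str(a) for a in (seq, *args))
    return f"AT*{name}={fields}\r"

control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
navdata = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
navdata.settimeout(0.001)     # the 1 ms timeout from the list above
navdata.bind(("", NAVDATA_PORT))

# flat-trim reference for the internal controller (send before take-off):
# control.sendto(at_command("FTRIM", 1).encode(), (DRONE_IP, CONTROL_PORT))
```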
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
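The conversion from the four doubles to the command string inside such a wrapper might be sketched like this. The bit-pattern encoding of float arguments follows the AR.Drone SDK convention; mapping the wrapper's (x, y) tilts onto the PCMD roll/pitch slots is an assumption of this sketch.

```python
import struct

def f2i(x):
    """AR.Drone SDK convention: a float argument is transmitted as the
    signed 32-bit integer that shares its IEEE-754 bit pattern."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def pcmd(seq, x_tilt, y_tilt, v_z, v_yaw):
    """Wrapper sketch: four doubles in [-1, 1] to an AT*PCMD string.
    The argument order (roll, pitch, gaz, yaw) follows the SDK."""
    flag = 1   # progressive commands enabled
    args = ",".join(str(f2i(v)) for v in (x_tilt, y_tilt, v_z, v_yaw))
    return f"AT*PCMD={seq},{flag},{args}\r"
```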
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
The ''gigecam'' interface of MATLAB® is used to access this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of it. A snapshot of the field together with the agent is taken with a short exposure time. This image is then searched for the pixels illuminated by the LEDs on the drone, yielding the coordinates on the x and y axes. The yaw (ψ) orientation of the drone is obtained from the relative positions of these pixels.<br />
<br />
The topcam can stream images with a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical as well. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
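The LED-based measurement could be sketched as follows (Python/NumPy; the brightness threshold is illustrative, and the principal-axis yaw used here is a simplification of the layout-based method, which additionally resolves the 180° ambiguity):

```python
import numpy as np

def drone_pose(img, brightness_threshold=240):
    """Sketch of the top-camera measurement: with a short exposure only
    the three LEDs saturate, so the drone position is the centroid of the
    bright pixels and the yaw is taken from their principal direction."""
    ys, xs = np.nonzero(img >= brightness_threshold)
    if xs.size == 0:
        return None                  # drone not detected in this frame
    cx, cy = xs.mean(), ys.mean()
    # principal direction of the bright-pixel cloud
    pts = np.stack([xs - cx, ys - cy])
    cov = pts @ pts.T
    w, v = np.linalg.eigh(cov)
    yaw = float(np.arctan2(v[1, -1], v[0, -1]))   # largest eigenvector
    return cx, cy, yaw
```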
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a WiFi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is mounted at the front of the drone, facing down. To reduce the added weight, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details of the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further extension, as parts of the extensive existing code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x,y) is of interest, since the ball and objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple trajectories such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x,y,θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than this value, the output is determined from the error and its derivative with PD coefficients (Fig.4). Since there is no position-dependent force in the motion equation of the drone, an I action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width; this prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix based on the Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
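As a minimal, runnable sketch of this dead-zone PD logic, the gains, dead-zone width and one-dimensional interface below are illustrative assumptions, not the tuned values used on the drone:<br />

```python
def dead_zone_pd(error, d_error, dead_zone, kp, kd):
    """Dead-zone PD controller for one drone state (illustrative gains
    and interface). Inside the dead zone -- the drone's 'comfort zone' --
    no command is sent, which avoids exciting the oscillatory region of
    the drone's built-in LLC. The error is deliberately NOT offset by the
    dead-zone width, so commands just outside the zone stay large enough
    to move the drone out of the oscillation region."""
    if abs(error) < dead_zone:
        return 0.0                        # comfort zone: no motion command
    return kp * error + kd * d_error      # PD action; no I-action needed

# An error inside the dead zone yields no command at all
print(dead_zone_pd(0.05, 0.0, dead_zone=0.1, kp=1.0, kd=0.2))  # -> 0.0
```

In the actual controller one such function would run per controlled state (x, y, ψ, z), with the outputs transformed into the drone frame before being sent to the LLC.<br />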
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is as important as the rotations themselves. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
Since in this project the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
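The reduced, yaw-only transformation can be sketched as follows (a hedged illustration; the function and variable names are our own, not from the project code):<br />

```python
import math

def global_to_drone(ex, ey, yaw):
    """Rotate a planar error vector from the global (field) frame into
    the drone body frame. With z fixed and roll/pitch assumed zero, the
    full Euler rotation matrix reduces to a 2-D rotation by yaw (psi)."""
    c, s = math.cos(yaw), math.sin(yaw)
    # Apply R(psi)^T to the global error [ex, ey]
    ex_b =  c * ex + s * ey
    ey_b = -s * ex + c * ey
    return ex_b, ey_b

# A drone yawed 90 degrees sees a global +x error as a body -y error
print(global_to_drone(1.0, 0.0, math.pi / 2))
```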
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino, and three motors (including encoders/controllers) to drive three omni-wheels independently. To the left of this robot, a copy fitted with a protective cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
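As an illustration of this chain, the sketch below receives one UDP datagram over localhost and parses it into wheel speeds. The 'v1,v2,v3' string format is a hypothetical stand-in; the real protocol is defined in the scripts on GitHub.<br />

```python
import socket

def parse_command(msg):
    """Parse a command string into three wheel speeds. The 'v1,v2,v3'
    format is a hypothetical stand-in for the real protocol."""
    v1, v2, v3 = (float(v) for v in msg.split(","))
    return v1, v2, v3

# Round-trip over localhost, mimicking the Wi-Fi link to the Raspberry Pi
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))                    # pick any free port
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"1.0,0.5,-0.5", recv.getsockname())
data, _ = recv.recvfrom(1024)
print(parse_command(data.decode()))            # speeds for the Arduino
recv.close(); send.close()
```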
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field, can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part best suited to the needs of the project was handpicked. This data, as stated earlier, is information on the location of the Turtle, the ball, and the players.<br> <br />
A small piece of code from the code base of TechUnited was taken out. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
Implementation MSD16 (2017-10-22, Tolcer: /* Top-Camera */)
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably, we would use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects, and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since this project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line-detection algorithm has been updated and reused in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line-detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange, or yellow; colors that lie in the upper-left corner of the CbCr-plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter out some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated, based on the blob size and roundness: <br />
<br />
confidence = (minorAxis / majorAxis) * (min(R_blob, R_ball) / max(R_blob, R_ball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
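The confidence formula above can be written directly as a small function (Python here for illustration; the project code itself uses Matlab):<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a detected blob is the ball: the product of a
    roundness term (minor/major axis ratio) and a size term comparing
    the blob radius with the expected ball radius."""
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size

# A perfectly round blob of exactly the expected radius scores 1.0
print(ball_confidence(20, 20, r_blob=10, r_ball=10))  # -> 1.0
```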
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering on the CbCr-plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blobs which could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different than one seen from an angle. A bigger range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well; further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take an image of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) && (minor_axis >= 2 * minimal_object_radius) && (major_axis >= 4 * minimal_object_radius)<br />
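A runnable form of this condition (Python for illustration, with `minimal_object_radius` abbreviated to `r_min`):<br />

```python
def possible_collision(major_axis, minor_axis, r_min):
    """Image-based collision test on a blob from the object-detection
    list: an elongated blob that is too large to be a single player
    suggests two players standing against each other."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * r_min
            and major_axis >= 4 * r_min)

# Two touching players of radius ~10 px can form one blob ~40 px long
print(possible_collision(major_axis=40, minor_axis=22, r_min=10))  # -> True
```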
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in various ways. In this project, the planar position (x, y, ψ) of the refereeing agent (the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch, and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not relevant for the refereeing tasks, because all the refereeing and image-processing algorithms are developed under an essential assumption: the drone's angular position is stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore, these two angles have not been taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y, and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, and its output data is accessible. The obtained altitude data is fused with the planar position data. The information obtained from the different position measurements is composed into the vector given below, which is used as 'droneState'.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the coordinates of the detected objects are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane; tilting of the drone about its roll (φ) and pitch (θ) axes is neglected.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the drone's center of gravity (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (image center) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the drone's x-axis. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion changes with the height of the camera, so the height information of the drone is used. Using the height of the drone and the FOV information of the camera, the pixel-to-millimeter ratio is calculated. More detailed information about the FOV is given in the next sections.<br />
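A sketch of this pixel-to-millimeter conversion; the 60° FOV and 640 px image width below are assumed example values, not the AI-ball specifications:<br />

```python
import math

def pixels_to_mm(px, image_width_px, fov_deg, height_mm):
    """Convert a pixel offset from the image centre into millimetres on
    the ground plane, using the drone altitude and the camera's
    horizontal field of view (camera assumed parallel to the ground)."""
    # Width of the ground strip covered by the image at this altitude
    ground_width_mm = 2 * height_mm * math.tan(math.radians(fov_deg) / 2)
    return px * ground_width_mm / image_width_px

# At 1.5 m altitude, assumed 60-degree FOV and 640 px image width,
# 100 pixels correspond to roughly 0.27 m on the field
print(pixels_to_mm(100, 640, 60, 1500))
```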
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the world model the latest information about the position and velocity of the target object, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not it has been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two aspects are addressed in the path-planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to use the velocity vector of the object in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to set as the desired reference. To solve this, we require a model of the drone motion, including the controller, to calculate the time it takes to reach a certain point given the drone's initial conditions. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) of the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied. For the ground robot, the reference value should be determined only in the moving direction of the turtle. Hence, only the x-component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
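The search for the time ahead can be sketched as follows; the drone motion model `time_to_target` is a hypothetical interface standing in for the drone-plus-controller model described above:<br />

```python
import math

def find_reference(ball_pos, ball_vel, time_to_target, t_max=5.0, dt=0.1):
    """Search for the time ahead t0 satisfying t0 ~= TT (cf. Fig.3).

    time_to_target(target) returns the time the drone needs to reach a
    given (x, y) point -- a hypothetical stand-in for the motion model.
    Returns the reference position [x(t+t0), y(t+t0)] for the controller.
    """
    t0 = 0.0
    while t0 < t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(target) <= t0:   # drone can arrive in time
            return target
        t0 += dt
    return ball_pos   # no intercept found: fall back to current position

# Toy drone model: flies at 2 m/s in a straight line from the origin
tt = lambda p: math.hypot(p[0], p[1]) / 2.0
print(find_reference((4.0, 0.0), (0.5, 0.0), tt))
```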
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning calculated from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision-avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. This command, a velocity perpendicular to the velocity vector of each drone, is sent to the LLC and stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an area of interest for those continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
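A sketch of this storage design; the method and property names here are illustrative (the actual set/get function names are listed in Tables 1 and 2):<br />

```python
class Player:
    """Last known state of one field player (illustrative)."""
    def __init__(self):
        self.position = None

class WorldModel:
    """Storage sketch mirroring the described design: ball, drone and
    turtle are plain properties, players are objects of their own, and
    data is changed only through explicit 'set' functions so processes
    cannot accidentally overwrite World Model data."""
    def __init__(self, n):
        self._ball = None
        self._drone = None
        self._turtle = None
        self.players = [Player() for _ in range(2 * n)]  # n per team

    def set_ball(self, pos):
        self._ball = pos          # only 'set' functions mutate the WM

    def get_ball(self):
        return self._ball

W = WorldModel(2)                 # W = WorldModel(n), as in the text
W.set_ball((1.0, -0.5))
print(W.get_ball())
```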
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
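The two-hypothesis reset logic described above (two consecutive measurements more than 0.5 m from the estimate re-initialise the strong filter) can be sketched as:<br />

```python
import math

def update_hypothesis(estimate, measurements, threshold=0.5):
    """Outlier logic sketch: if the last two measurements both lie more
    than `threshold` metres from the strong filter's estimate, the ball
    is assumed to have changed direction and the last measurement
    re-initialises the strong filter; a single outlier is treated as a
    false positive from the image processing."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    outliers = [m for m in measurements[-2:] if dist(m, estimate) > threshold]
    if len(outliers) == 2:
        return measurements[-1]   # reset: new initial position
    return estimate               # keep current estimate (likely noise)

# One outlier is noise; two consecutive outliers trigger a reset
print(update_hypothesis((0.0, 0.0), [(0.1, 0.0), (0.9, 0.0)]))
print(update_hypothesis((0.0, 0.0), [(0.8, 0.0), (0.9, 0.0)]))
```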
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the 'Match' function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
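The greedy nearest-neighbour 'Match' step described above can be sketched as follows (names are illustrative, not the actual function interface):<br />

```python
import math

def match(measurements, player_positions):
    """Match each measured position to its closest known player; if that
    player is already taken by an earlier measurement, fall back to the
    next-nearest player, as described in the text."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    taken, assignment = set(), {}
    for i, m in enumerate(measurements):
        order = sorted(range(len(player_positions)),
                       key=lambda j: dist(m, player_positions[j]))
        for j in order:
            if j not in taken:
                taken.add(j)
                assignment[i] = j   # measurement i -> player j
                break
    return assignment

# Both measurements are nearest to player 0; the second one falls back
print(match([(0.1, 0.0), (0.2, 0.0)], [(0.0, 0.0), (2.0, 0.0)]))
```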
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commend sent by host computer. The command contains the control signals in pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward velocity and side velocity in bode frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by camera on the top of the field. Based on the LEDs on the captured image, the position and orientation of drone on the field can calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and suppress the measurement noise, so that the subsequent closed-loop drone control system remains robust. Since the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the reflected drone position information is incomplete. The example (fig. 2) gives a visual impression of the raw data measured by the top camera; it clearly indicates what the drone motion looks like in one degree of freedom. To make the signal continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces reasonable estimates for the empty data points. <br />
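As an illustration of this preprocessing step, a short Python sketch (the project uses MATLAB; the function name is illustrative) of linearly interpolating the roughly 25% of camera samples that come back empty, represented here as NaN:

```python
import numpy as np

def fill_gaps(t, x):
    """Linearly interpolate NaN entries in a position trace.

    t: sample times; x: measured positions with NaN wherever the
    top camera did not detect the drone LEDs.
    """
    x = np.asarray(x, dtype=float)
    missing = np.isnan(x)
    x_filled = x.copy()
    # Interpolate the missing samples from the valid neighbors.
    x_filled[missing] = np.interp(t[missing], t[~missing], x[~missing])
    return x_filled
```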
====Coordinate system introduction ====<br />
Since the drone is a flying object with four degrees of freedom over the field, two coordinate systems are used: one is the body frame, the other the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d), and the measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
Data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame so as to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
====Model identification from input to position ====<br />
Theoretically, the inputs and the corresponding velocity outputs are decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to help MATLAB make a reasonable estimate of the model. Based on the measured response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model is evaluated in MATLAB by comparing it with the measured response; the result indicates how well the model fits. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was measured and modeled with a delay of 4 samples due to the wireless communication. Compared with repeated measurements, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; this nonlinear behavior of the system may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the further Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To that end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the recent state of the drone together with the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, reducing false positives, errors and the processing time of the algorithms. <br />
<br />
===Ball Size Estimator===<br />
The ball detection skill is achieved using the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera, and the real size of the ball. The height is obtained from the drone position data; the other values are defined in the initialization function. The estimated ball radius in pixel units is fed into the ball detection skill.<br />
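The pixel-radius calculation can be sketched as follows in Python (the project uses MATLAB; the function name and the 0.11 m ball radius are illustrative, while the default FOV and resolution are the Ai-Ball values from the Hardware section), assuming a simple pinhole camera looking straight down:

```python
import math

def ball_radius_pixels(height_m, ball_radius_m=0.11,
                       fov_diag_deg=60.0, res=(640, 480)):
    """Expected ball radius in pixels for a downward-facing camera.

    The diagonal ground span at height h is 2*h*tan(FOV/2); the
    pixel scale then follows from the image diagonal in pixels.
    """
    diag_px = math.hypot(*res)
    diag_m = 2.0 * height_m * math.tan(math.radians(fov_diag_deg) / 2.0)
    px_per_m = diag_px / diag_m
    return ball_radius_m * px_per_m
```

As expected, the higher the drone flies, the smaller the expected ball radius in the image becomes.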
<br />
===Object Size Estimator===<br />
Very similarly to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is defined here. The estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The line estimator block gives the expected outer lines of the field. It continuously calculates the relative position of the outer lines corresponding to the state of the drone, encoded using the Hough transform parameterization. The line estimator is used to enable and disable the line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the line detection skill should be enabled, and otherwise disabled. This enable flag is also coded in the output matrix, because an always-running line detection skill would produce many false positive line outputs. The expected positions of the outer lines are not only used for enabling and disabling the line detection skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. One column is added to the output matrix to indicate whether the predicted line is an ''end'' or a ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone as given on the manufacturer’s website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project the drone's own structure, control electronics and software are used for positioning of the drone. Apart from that, controlling a drone from scratch is complicated and out of scope for this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; for refereeing, however, it should look downwards. The first idea was therefore to disassemble it and mount the camera on a swivel so it could tilt down 90 degrees, at the cost of some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera is not straightforward in MATLAB; further effort showed that using this camera is either incompatible with MATLAB or causes a lot of delay. The idea of a swiveled drone camera was therefore abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time: the best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a FOV of close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2; the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a Wi-Fi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
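As an illustration of what such a wrapper does on the input side, a Python sketch of building an AT*PCMD progressive-movement command from the four-element input vector, assuming the encoding from the AR.Drone SDK (floating-point arguments are transmitted as the signed 32-bit integer sharing the IEEE-754 bit pattern of the float); the function name and the argument order, taken from the wrapper's input vector above, are illustrative:

```python
import struct

def pcmd_string(seq, cmd):
    """Build an AT*PCMD command string for the AR.Drone.

    cmd = [tilt_x, tilt_y, v_z, v_psi], each in [-1, 1], matching
    the wrapper's input vector. Per the SDK, each float argument is
    sent as the int32 that shares its IEEE-754 bit pattern.
    """
    def f2i(f):
        # Reinterpret the float's bits as a signed 32-bit integer.
        return struct.unpack('<i', struct.pack('<f', float(f)))[0]
    flag = 1  # progressive commands enabled
    args = ','.join(str(f2i(v)) for v in cmd)
    return 'AT*PCMD={},{},{}\r'.format(seq, flag, args)
```

For example, a zero command vector encodes to all-zero integer arguments, while -0.5 encodes to the bit pattern of the float -0.5.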
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
The ‘GigeCam’ interface of MATLAB® is used to access this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of it. A snapshot of the field, including the agent, is taken with a short exposure time. By searching this image for the pixels illuminated by the LEDs on the drone, the x and y coordinates of the drone are obtained; its yaw (ψ) orientation follows from the relative positions of those pixels.<br />
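A simplified Python sketch of this idea (the real pipeline runs in MATLAB on GigE camera snapshots; the function name and threshold are illustrative): position is taken as the centroid of the lit pixels, and a heading line as the principal axis of their spread. Note that the principal axis only gives the yaw modulo 180°; the actual system resolves this ambiguity from the asymmetric arrangement of the three LEDs.

```python
import numpy as np

def drone_pose_from_leds(img, thresh=200):
    """Estimate drone (x, y, psi) from an overhead short-exposure image.

    img: 2-D grayscale array where only the bright LEDs exceed
    `thresh`. Returns pixel coordinates of the centroid and the
    orientation of the principal axis of the LED pixel cloud.
    """
    ys, xs = np.nonzero(img > thresh)
    cx, cy = xs.mean(), ys.mean()
    # Principal axis of the LED pixel cloud gives the heading line.
    pts = np.stack([xs - cx, ys - cy])
    cov = pts @ pts.T
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    psi = np.arctan2(major[1], major[0])
    return cx, cy, psi
```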
<br />
The top camera can stream images to the laptop at a framerate of 30 Hz, but searching the image for the drone (i.e. the image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After this search, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is placed facing down at the front of the drone. To reduce the added weight, the camera's batteries are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); its definition is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to know the real-world size of the image frame and the corresponding real dimension per pixel, and it is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the referee role. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, the planar motion of the drone in (x, y) is of interest, since the ball and objects on the pitch move in a 2-D space. Consequently, the desired drone trajectories are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ) measured from the top camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero; this results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the motion equation of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width; this prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that, these errors are calculated with respect to the global coordinate system. Hence, the control command first must be transformed in to drone coordinate system with rotational matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
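The dead-zone PD law described above can be sketched in a few lines of Python (the real controller runs in Simulink; the function name and gains are placeholders):

```python
def deadzone_pd(error, d_error, kp, kd, deadzone):
    """Dead-zone PD control action for one drone state.

    Inside the dead zone (the comfort zone) the command is zero, so
    no small commands are sent in the LLC's oscillation region.
    Outside it, a plain PD law is applied without offsetting the
    error back to the zone edge, and no I-action is needed.
    """
    if abs(error) < deadzone:
        return 0.0
    return kp * error + kd * d_error
```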
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Ф, φ, θ) about the sequentially displaced axes of a reference frame, generally referred to as Euler angles. Within this method, the order of rotation around the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle.<br />
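With only the yaw angle left, the transformation becomes a planar rotation. A Python sketch (the project implements this in Simulink; the function name and the sign convention are assumptions for illustration):

```python
import numpy as np

def global_to_body(v_global, psi):
    """Rotate a planar vector from the global frame to the drone
    body frame using only the yaw angle psi, valid here because
    pitch and roll stay small and altitude is held constant.
    """
    c, s = np.cos(psi), np.sin(psi)
    # Inverse (transpose) of the yaw rotation matrix.
    R = np.array([[c, s], [-s, c]])
    return R @ np.asarray(v_global)
```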
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed so the robot can also be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state computed, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of the players,<br><br />
and of other entities present on the field. These locations are expressed in the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally; therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone, which were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The TechUnited player-robots communicate with each other via the UDP protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited, used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. As stated earlier, this data is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code base of TechUnited. It consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are; to detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as-is without alteration. Ideally we could use this software to also process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space; for detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
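The color-space conversion can be sketched as follows in Python (the project uses MATLAB's ''rgb2ycbcr''; this sketch uses the full-range ITU-R BT.601 coefficients, whereas MATLAB's function uses the studio-swing variant, so values differ slightly, but the reasoning about color corners is the same):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image (uint8, HxWx3) to YCbCr.

    Full-range ITU-R BT.601 coefficients: Y is luma, Cb and Cr are
    the blue- and red-difference chroma channels centered at 128.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```

Red, orange and yellow pixels end up with high Cr and low Cb, which is the "upper-left corner of the CbCr plane" used by the ball detection below.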
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generations [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project] ; the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is separating the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color: the balls used can be red, orange or yellow, colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which pixels falling into this corner get a value of 1 and the rest a value of 0. Next, to filter noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. A blob recognition algorithm then returns the blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs, it is determined whether each could be a ball: blobs that are too big or too small are removed. For each remaining candidate, a confidence is calculated based on the blob size and roundness: <br />
<br />
<math display="center">\text{confidence} = \frac{\text{minor axis}}{\text{major axis}} \cdot \frac{\min(R_{\text{blob}}, R_{\text{ball}})}{\max(R_{\text{blob}}, R_{\text{ball}})}</math><br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
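The size filtering and confidence computation above can be sketched in Python (illustrative only; the project itself uses MATLAB's image processing toolbox, and the `Blob` type and function names here are our own):

```python
from dataclasses import dataclass

@dataclass
class Blob:
    center: tuple          # (x, y) in pixels
    minor_axis: float      # minor axis length in pixels
    major_axis: float      # major axis length in pixels

def ball_confidence(blob, r_ball):
    """Confidence based on roundness and size match with the expected ball radius."""
    roundness = blob.minor_axis / blob.major_axis
    r_blob = (blob.minor_axis + blob.major_axis) / 4.0  # mean half-axis as radius estimate
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match

def filter_ball_candidates(blobs, r_ball, r_min, r_max):
    """Drop blobs that are too small or too large, then score the rest."""
    candidates = []
    for b in blobs:
        r_blob = (b.minor_axis + b.major_axis) / 4.0
        if r_min <= r_blob <= r_max:
            candidates.append((b, ball_confidence(b, r_ball)))
    return candidates
```

Here the blob radius is approximated as the mean of the two half-axes; the actual MATLAB code works directly on the blob properties returned by the blob recognition step.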
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of color filtering on the CbCr plane, the filtering is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blob sizes that could be players is larger for the object detection than it was for the ball detection. This is because the players are not perfectly round like the ball: a player seen from the top appears different than one seen from an angle. A bigger acceptance range for blobs lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-the-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an improvement was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system is known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take the images of the playing field and see in the image that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since the collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection makes use of the list of blobs generated in the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and they are compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) && (minor_axis >= 2 * minimal_object_radius) && (major_axis >= 4 * minimal_object_radius)<br />
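In Python, the same check might read as follows (a direct transcription of the condition above; the function name is ours):

```python
def possible_collision(minor_axis, major_axis, min_obj_radius):
    """A single elongated blob spanning roughly two player widths is
    flagged as a possible collision between two players."""
    elongated = (major_axis / minor_axis) > 1.5
    wide_enough = minor_axis >= 2 * min_obj_radius
    long_enough = major_axis >= 4 * min_obj_radius
    return elongated and wide_enough and long_enough
```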
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (here, only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating the Agents: Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are stabilized well enough that roll (φ) and pitch (θ) are zero. Therefore these two angles are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. its x, y and yaw (ψ) values. However, to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, whose output data is accessible. The obtained drone altitude is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as 'droneState'.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating the Objects: Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of each detected object or ball are calculated relative to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting tilting about the drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera, so the height information of the drone must be used. Using the height of the drone and the FOV of the camera, the pixel-to-millimeter ratio is calculated. More detailed information about the FOV is given in the next sections.<br />
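Putting the principles above together, the pixel-to-field conversion can be sketched as follows (an illustrative Python sketch under the stated assumptions; the parameter names and the horizontal-FOV convention are our own choices, not taken from the project code):

```python
import math

def pixels_to_field(px, py, drone_x, drone_y, drone_yaw, drone_z,
                    fov_deg, image_width_px, cam_offset_x=0.0):
    """Convert a pixel offset (px, py), measured from the image center,
    to field coordinates: camera parallel to the ground, image center at
    the focal center, camera offset along the drone's x-axis."""
    # Ground footprint of the image width at this height, from the FOV.
    footprint = 2.0 * drone_z * math.tan(math.radians(fov_deg) / 2.0)
    units_per_px = footprint / image_width_px  # units follow those of drone_z

    # Offset of the detected point in the drone (body) frame.
    bx = cam_offset_x + px * units_per_px
    by = py * units_per_px

    # Rotate by the yaw angle into the field frame and add the drone position.
    c, s = math.cos(drone_yaw), math.sin(drone_yaw)
    return (drone_x + c * bx - s * by,
            drone_y + s * bx + c * by)
```

A point at the image center maps to the camera position itself; points at the image edge map half a ground footprint away, scaled by the drone's height.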
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents, which is sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block assigns 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of objects such as the ball, whether or not it has been updated by the agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns an agent the task of locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the drone is far from the ball, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal look-ahead time t0 to use for the desired reference. To solve this, we require a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the drone's initial condition. Then, in the search algorithm, for each time step ahead of the ball, the time-to-target (TT) for the drone is calculated (see Fig.3). The target position is simply the predicted ball position at that time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move in only one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the turtle, so only the x-component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
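The search for the look-ahead time t0 can be sketched as follows (illustrative Python; the project uses the identified drone model with its controller to compute the time-to-target, while this sketch substitutes a constant-speed drone and a constant-velocity ball for simplicity):

```python
import math

def find_reference(drone_pos, drone_speed, ball_pos, ball_vel,
                   dt=0.1, t_max=10.0):
    """Search for the look-ahead time t0 such that the drone's
    time-to-target TT equals t0, then return the predicted ball
    position at that time as the reference."""
    t0 = 0.0
    while t0 <= t_max:
        # Predicted ball position t0 seconds ahead.
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0],
                          target[1] - drone_pos[1])
        tt = dist / drone_speed  # time-to-target for this candidate
        if tt <= t0:             # first instant the drone can arrive in time
            return target
        t0 += dt
    # Fall back to the current ball position if no intercept is found.
    return ball_pos
```

For a stationary ball the search simply converges to the ball position itself; for a moving ball the returned reference lies ahead of the ball along its velocity vector, giving the straighter (blue) trajectory of Fig.2.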
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above a field, the path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has higher priority than the optimal path planning based on the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is done by sending a relatively strong command to the drones in a direction that maintains a safe distance. The command, a velocity, must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and it stops once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
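A minimal Python analogue of this storage design might look like the following (the actual class is in MATLAB; the method names follow the 'set' convention of Table 1 but are our guesses, as is the assumption that the player list holds two teams of n players):

```python
class Player:
    def __init__(self):
        self.position = None  # last known (x, y), None until first update

class WorldModel:
    """Minimal analogue of the MATLAB WorldModel class: storage with
    explicit 'set' functions so processes cannot overwrite data by accident."""
    def __init__(self, n_players):
        self._ball = None
        self._drone = None
        self._turtle = None
        # Two teams of n players each (our assumption about the layout).
        self._players = [Player() for _ in range(2 * n_players)]

    # Writes go through explicit setters, mirroring Table 1.
    def set_ball(self, pos):
        self._ball = pos

    def set_drone(self, state):
        self._drone = state

    def set_player(self, idx, pos):
        self._players[idx].position = pos

    # Reads are globally accessible.
    def ball(self):
        return self._ball

    def player(self, idx):
        return self._players[idx].position
```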
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; rather, its hypothesis is updated purely by new measurements. In case two consecutive measurements are more than 0.5 m away from the estimate at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
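The exact weighting is defined by the equation above; one plausible reconstruction consistent with the parameter descriptions (our interpretation, not necessarily the implemented formula) is a blend of the old velocity, the measured displacement, and the innovation:

```python
def update_particle_velocity(v_old, z_new, z_old, x_old, dt,
                             alpha_v, alpha_x, alpha_z):
    """Plausible reconstruction of the velocity update: alpha_v weights
    the old velocity (stronger filter), alpha_z the measured displacement
    (direction), alpha_x the innovation (trust in measurements)."""
    vx = (alpha_v * v_old[0]
          + alpha_z * (z_new[0] - z_old[0]) / dt
          + alpha_x * (z_new[0] - x_old[0]) / dt)
    vy = (alpha_v * v_old[1]
          + alpha_z * (z_new[1] - z_old[1]) / dt
          + alpha_x * (z_new[1] - x_old[1]) / dt)
    return (vx, vy)
```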
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate from inaccurate sensors. In the current implementation this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case where the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the 'Match' function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
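The matching behavior described here can be sketched as a greedy nearest-neighbor assignment (illustrative Python; the actual 'Match' function is in MATLAB and this sketch reproduces only the behavior described above, including the fall-back to the next-nearest free player):

```python
def match_measurements(measurements, player_positions):
    """Greedy nearest-neighbor matching of measured positions to the
    last known player positions. When two measurements claim the same
    player, the later one falls back to its next-nearest free player,
    mirroring the (suboptimal) behavior described in the text."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    assigned = {}   # measurement index -> player index
    taken = set()
    for i, m in enumerate(measurements):
        # Players sorted by distance to this measurement.
        order = sorted(range(len(player_positions)),
                       key=lambda j: dist2(m, player_positions[j]))
        for j in order:
            if j not in taken:
                assigned[i] = j
                taken.add(j)
                break
    return assigned
```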
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and reduce the measurement noise, so that the closed-loop control system for the drone is robust. Since the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt, a floating-point value in range [-1, 1]. Command (b) is left-right tilt, a floating-point value in range [-1, 1]. d is the drone angular speed, in range [-1, 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, ψ) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visualization of the original data measured from the top camera. Based on fig.2, the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make it continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces reasonable estimates for the empty data points. <br />
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, there exist two coordinate systems: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global frame. <br><br><br />
Data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame in order to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
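The rotation between the body frame and the global frame can be sketched as follows (illustrative Python; the project applies the equivalent rotation matrix in MATLAB around the Kalman filter):

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame using the yaw
    angle psi measured by the top camera."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)

def global_to_body(vx_glob, vy_glob, psi):
    """Inverse rotation: global frame back to the body frame, used to
    feed the Kalman filter with body-frame data."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_glob + s * vy_glob,
            -s * vx_glob + c * vy_glob)
```

Applying one rotation after the other returns the original vector, which is exactly what allows the filter to operate purely in the body frame.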
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from the data shown above. In the real world, no system is perfectly linear, due to external disturbances and component uncertainty. Hence, some assumptions are needed to help Matlab make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in Matlab. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there is a four-sample delay due to the wireless communication. Compared with results measured several times, the estimate is reasonable. <br><br><br />
<br />
In the real world, no system is perfectly linear; nonlinear behavior of the system may cause the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification is measured in a situation where the battery is full, the orientation is fixed, and the drone starts from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To that end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the current state of the drone together with the field of view and resolution of the camera (which are defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, reducing false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The ball detection skill uses the ''imfindcircles'' built-in function of MATLAB's image processing toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in units of ''pixels'' should be defined. This can be calculated from the available height of the agent carrying the camera, the field of view of the camera, and the real size of the ball. The height information is obtained from the drone position data; the others are defined in the initialization function. The estimated ball radius in pixel units is fed into the ball detection skill.<br />
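The pixel-radius estimate can be sketched as follows (illustrative Python; the horizontal-FOV convention and function name are our assumptions, not taken from the project code):

```python
import math

def expected_ball_radius_px(ball_radius_m, drone_height_m,
                            fov_deg, image_width_px):
    """Estimate the expected ball radius in pixels from the drone height,
    the camera's horizontal field of view, and the real ball radius."""
    # Width of the ground area imaged at this height.
    footprint_m = 2.0 * drone_height_m * math.tan(math.radians(fov_deg) / 2.0)
    px_per_m = image_width_px / footprint_m
    return ball_radius_m * px_per_m
```

The higher the drone flies, the larger the ground footprint and thus the smaller the expected radius in pixels, which is exactly why the estimate must be refreshed from the current drone state.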
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, here the real size of the objects is defined. The estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The line estimator block gives the expected outer lines of the field. It continuously calculates the relative position of the outer lines corresponding to the state of the drone. This position information is encoded using the Hough transform criteria. The line estimator is used for enabling and disabling the line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the line detection skill should be enabled; otherwise it should be disabled. This enable/disable information is also encoded in the output matrix, since an always-running line detection skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the line detection skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
The more detailed information and the algorithm behind this estimator are explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer's website, are listed in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled from a mobile phone through free software (for both Android and iOS) and streams high-quality HD video to the phone. It has a front camera whose capabilities are given in Table 1, as well as its own built-in computer, controller and driver electronics. Since it is a consumer product, its design, body and controller are very robust. Therefore, it was decided to use the drone's own structure, control electronics and software for positioning the drone; besides, controlling a drone from scratch is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, mounted on the front, which could be used to capture images. For refereeing, however, the camera should look downwards, so the first idea was to disassemble it and mount it on a swivel tilted down by 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images have to be reachable from MATLAB. After considerable trial and error, it turned out that capturing and transferring images from the embedded drone camera to MATLAB is not straightforward: it is either incompatible with MATLAB or introduces a lot of delay. The idea of a swiveled drone camera was therefore abandoned and an alternative camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this way brings some difficulties. Since the source code of the drone is closed, it is hard to access some of the data on the drone, including the camera images. Image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with the built-in software. An indirect route is therefore required, which costs processing time: the best frame rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used to keep the processing time low.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition is shown in the figure. The captured images have a 16:9 aspect ratio. Using this fact, the measurements showed a view angle close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, the Ai-Ball, is described in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
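A minimal sketch of this initialization in Python (the project uses MATLAB UDP objects; the command strings follow the AR.Drone SDK, but the sequence numbers and the navdata wake-up payload shown here should be verified against the SDK documentation):

```python
import socket

DRONE_IP = "192.168.1.1"
AT_PORT, NAV_PORT = 5556, 5554

def at_config(seq, key, value):
    # AT*CONFIG command string as defined in the AR.Drone SDK
    return f'AT*CONFIG={seq},"{key}","{value}"\r'

def init_navdata():
    """Sketch of the navdata initiation: wake the stream on port 5554,
    switch the drone to the reduced 'demo' navdata set, then flat-trim."""
    at = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.bind(("", NAV_PORT))
    nav.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAV_PORT))   # trigger the stream
    at.sendto(at_config(1, "general:navdata_demo", "TRUE").encode(),
              (DRONE_IP, AT_PORT))
    at.sendto(b"AT*FTRIM=2\r", (DRONE_IP, AT_PORT))         # horizontal reference
    return at, nav
```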
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
The ‘GigeCam’ toolbox of MATLAB® is used to access this camera. To obtain the indoor position of the drone, three ultra-bright LEDs are placed on top of it. A snapshot of the field, together with the agent, is taken with a short exposure time. By searching this image for the pixels illuminated by the LEDs on the drone, the coordinates along the x and y axes are obtained; the yaw (ψ) orientation of the drone follows from the relative positions of these pixels.<br />
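The LED search can be sketched as follows (a Python/NumPy stand-in for the MATLAB processing; the brightness threshold is a placeholder, and the 180° yaw ambiguity of the principal axis would be resolved by the asymmetric LED layout in a real implementation):

```python
import numpy as np

def locate_drone(gray, threshold=250):
    """Estimate drone (x, y, yaw) in pixels/radians from a short-exposure
    top-camera image containing three bright LEDs (illustrative sketch)."""
    ys, xs = np.nonzero(gray >= threshold)   # pixels lit by the LEDs
    if xs.size == 0:
        return None                           # drone not found in this frame
    cx, cy = xs.mean(), ys.mean()            # centroid of the LED pattern
    pts = np.stack([xs - cx, ys - cy])
    # principal axis of the LED pattern gives the heading up to 180 degrees;
    # the asymmetric LED layout would disambiguate it in the real system
    w, v = np.linalg.eigh(np.cov(pts))
    axis = v[:, np.argmax(w)]
    yaw = np.arctan2(axis[1], axis[0])
    return cx, cy, yaw
```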
<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After comparing the options, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is mounted at the front of the drone, facing down. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel, and it is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
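Assuming a rectilinear lens, the 60° diagonal FOV and the 4:3 aspect ratio determine the horizontal and vertical FOV angles (an illustrative derivation, not taken from the datasheet):

```python
import math

def fov_h_v(diag_fov_deg, aspect_w, aspect_h):
    """Split a diagonal FOV into horizontal/vertical FOV for a rectilinear
    lens: tangents of the half-angles scale with the sensor sides."""
    t_d = math.tan(math.radians(diag_fov_deg) / 2.0)
    diag = math.hypot(aspect_w, aspect_h)
    fov_h = 2.0 * math.degrees(math.atan(t_d * aspect_w / diag))
    fov_v = 2.0 * math.degrees(math.atan(t_d * aspect_h / diag))
    return fov_h, fov_v
```

For the Ai-Ball this yields roughly 50° horizontal and 38° vertical.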
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. The software developed at TechUnited did not need any expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the Drone Referee project. The agents in this project have two main capabilities: they can move and take images. Based on the situation of the game and the positions of the agents, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are simple ones such as straight lines, while aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero; this results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative using PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, errors outside the dead zone are not offset by the dead-zone width; this prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
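The dead-zone PD logic described above can be sketched per axis as follows (a Python stand-in for the Simulink controller; the gains and dead-zone width are placeholders):

```python
def hlc_axis(error, d_error, dead_zone, kp, kd):
    """One axis of the high-level controller: zero output inside the
    comfort (dead) zone, plain PD on the raw error outside it.
    The error is deliberately NOT offset by the dead-zone width, which
    avoids sending small commands in the drone's oscillatory region."""
    if abs(error) < dead_zone:
        return 0.0
    return kp * error + kd * d_error
```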
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about sequentially displaced axes of the reference frame. These angles are generally referred to as Euler angles; within this method, the order of the rotations about the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is kept constant and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
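The reduced transformation is then a planar rotation by the yaw angle; as a sketch (sign conventions here are illustrative):

```python
import math

def global_to_body(vx_g, vy_g, yaw):
    """Rotate a velocity command from the global (field) frame into the
    drone body frame using only the yaw angle (roll/pitch assumed zero)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return c * vx_g + s * vy_g, -s * vx_g + c * vy_g
```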
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to drive three omni-wheels independently. On the left, a copy with a protection cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap; the details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally, so the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked: as stated earlier, information on the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code from the code base of TechUnited was taken out. It consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''.<br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45712Implementation MSD162017-10-22T22:48:10Z<p>Tolcer: /* Top-Camera */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (B.O.O.P.)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are; to detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as is, without alterations. Preferably we would use this software to also process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique.<br />
Since this project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and reused. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, making it an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color: the balls that can be used are red, orange or yellow, colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, for noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs it is determined whether a blob could be a ball: blobs that are too big or too small are removed from the list. For each remaining candidate ball, a confidence is calculated based on the blob size and roundness:<br />
<br />
confidence = (minor_axis / major_axis) * (min(R_blob, R_ball) / max(R_blob, R_ball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
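The confidence formula can be expressed as follows (a Python stand-in for the MATLAB code; the blob properties come from the blob-recognition step):

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a blob is the ball: a roundness term (minor/major
    axis ratio) times a size-agreement term (ratio of the blob radius to
    the expected ball radius), both in [0, 1]."""
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```

A perfectly round blob of exactly the expected radius scores 1; elongation or a size mismatch lowers the score.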
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering on color in the CbCr plane, the filtering is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of blob sizes accepted as possible players is larger than for the ball detection. This is done because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A wider acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. Similar to the line detection case, the algorithm developed by the previous generation is used essentially. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted by the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when an image of the playing field shows no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection uses the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
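As a runnable sketch of this check (Python; the thresholds 1.5, 2 and 4 are taken from the condition above):

```python
def possible_collision(minor_axis, major_axis, min_player_radius):
    """Flag a blob as a possible collision: an elongated blob (aspect
    ratio above 1.5) that is at least one player wide and two players
    long is likely two touching players merged into a single blob."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_player_radius
            and major_axis >= 4 * min_player_radius)
```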
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in various ways. In this project, the planar position (x, y, ψ) of the refereeing agent (the drone) is obtained from an ultra-bright LED strip detected by the top camera. The ball position is obtained by image processing and further post-processing of the image data.<br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ).<br />
Although the roll (φ) and pitch (θ) angles are important for controlling the drone itself, they are not important for the refereeing tasks, because all refereeing and image processing algorithms are developed under one essential assumption: the drone attitude is well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to handle the refereeing tasks and the image processing, the drone altitude must also be known. The drone has its own altimeter whose output data is accessible, and the obtained altitude is fused with the planar position data. The information obtained from the different position measurements is composed into the vector given below, which is used as ‘droneState’.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates have to be transformed into the field reference coordinate axes. The coordinate system of the image in MATLAB is shown below; note that here the unit of the obtained position data is pixels.<br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of a detected object or ball are calculated with respect to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to be coincident with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained.<br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Finally, the calculated pixel coordinates of the detected object have to be converted into real-world units (from pixels to millimeters). This conversion factor changes with the height of the camera: using the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
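Putting the principles above together (a Python sketch; the image-axis alignment, the sign conventions, the 0.2 m camera offset and the helper name are assumptions for illustration, not the project's exact values):

```python
import math

def pixel_to_field(u, v, img_w, img_h, height_m, diag_fov_deg,
                   drone_x, drone_y, drone_yaw, cam_offset=0.2):
    """Map a detected pixel (u, v) to field coordinates in meters."""
    # ground meters per pixel from the drone height and the diagonal FOV
    m_per_px = (2.0 * height_m * math.tan(math.radians(diag_fov_deg) / 2.0)
                / math.hypot(img_w, img_h))
    # offset from the image center in the drone body frame; the wide image
    # edge is taken parallel to the drone x-axis, camera ahead of the CoG
    bx = (u - img_w / 2.0) * m_per_px + cam_offset
    by = (img_h / 2.0 - v) * m_per_px
    # rotate by the yaw angle into the field frame and add the drone position
    c, s = math.cos(drone_yaw), math.sin(drone_yaw)
    return drone_x + c * bx - s * by, drone_y + s * bx + c * by
```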
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents, sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent; for instance, it sends 'detect ball' as a task to agent A (the drone) and 'locate player' to agent B. The path-planning block then requests from the World Model the latest information about the position and velocity of the target object, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig. 1, it is assumed that the World Model is able to provide the position and velocity of objects such as the ball, whether or not it has been updated by an agent camera; in the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can also be taken into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to set as the desired reference. To solve it, we require a model of the drone motion including the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply extrapolated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should be determined only along the moving direction of the turtle. Hence, only the X component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
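The search for the time ahead can be sketched as below. The project implementation is in MATLAB; this Python sketch is illustrative, and `time_to_target` stands in for the drone motion model mentioned above (its exact form is an assumption).

```python
def find_time_ahead(ball_pos, ball_vel, time_to_target, t_max=5.0, dt=0.05):
    """Scan candidate look-ahead times t0 and return the first extrapolated
    ball position for which the drone's predicted time-to-target (TT)
    satisfies TT <= t0, i.e. drone and ball can arrive simultaneously.
    `time_to_target(target)` is assumed to come from a drone motion model."""
    t0 = 0.0
    while t0 <= t_max:
        # Ball position extrapolated t0 seconds ahead (constant velocity).
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(target) <= t0:
            return target
        t0 += dt
    # Fall back to the farthest extrapolated point if no match is found.
    return (ball_pos[0] + ball_vel[0] * t_max,
            ball_pos[1] + ball_vel[1] * t_max)
```

With an accurate motion model, the returned target approximates the point where t0 = TT, i.e. the interception point of the two velocity vectors.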
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning that is calculated based on the objectives of the drones (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be an interesting extension for others who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
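The perpendicular repel command described above can be sketched as follows. This is an illustrative Python sketch; the sign convention (repelling to the left of the velocity vector) and the hover fallback are assumptions, since the project did not implement this block.

```python
import math

def repel_velocity(v, magnitude=1.0):
    """Velocity command perpendicular to the drone's own velocity vector v,
    rotated 90 degrees counter-clockwise (the left/right choice is an
    assumption). Used to push two drones apart during imminent collision."""
    norm = math.hypot(v[0], v[1])
    if norm == 0.0:
        return (magnitude, 0.0)  # arbitrary direction when hovering
    return (-v[1] / norm * magnitude, v[0] / norm * magnitude)
```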
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; rather, its hypothesis is updated purely by new measurements. In case two consecutive measurements are more than 0.5 meters removed from the estimate at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
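The reset rule above can be sketched as follows. The project filter is a MATLAB particle filter; this Python class is only an illustrative sketch of the outlier logic, and the class and attribute names are assumptions.

```python
import math

class HypothesisTracker:
    """Sketch of the reset rule described above: if two consecutive
    measurements lie more than `threshold` meters from the current (strong)
    estimate, the last measurement becomes the new initial position."""
    def __init__(self, init_pos, threshold=0.5):
        self.estimate = init_pos
        self.threshold = threshold
        self.outliers = 0

    def update(self, z):
        dist = math.hypot(z[0] - self.estimate[0], z[1] - self.estimate[1])
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:   # probable change of direction
                self.estimate = z    # re-initialize the strong filter
                self.outliers = 0
        else:
            self.outliers = 0
            # (here the strong particle filter would blend z into estimate)
        return self.estimate
```

A single outlier leaves the estimate untouched (possible false positive); a second consecutive outlier re-initializes the strong filter.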
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all fed into the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in the case of normally distributed uncertainty. This variance determines how much the measurement is trusted and distinguishes between accurate and inaccurate sensors. In the current implementation, this variance is fixed irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
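The greedy matching described above can be sketched as follows. The project's ‘Match’ function is in MATLAB; this Python version is an illustrative sketch with assumed names.

```python
def match_measurements(measurements, known_positions):
    """Greedy nearest-neighbor matching: each measurement is assigned to
    the closest still-unmatched known player position. Returns a list of
    player indices, one per measurement."""
    assigned = []
    taken = set()
    for mx, my in measurements:
        best, best_d = None, float("inf")
        for i, (px, py) in enumerate(known_positions):
            if i in taken:
                continue  # falls back to the next-nearest unique player
            d = (mx - px) ** 2 + (my - py) ** 2
            if d < best_d:
                best, best_d = i, d
        assigned.append(best)
        taken.add(best)
    return assigned
```

Note the non-optimality mentioned above: when the nearest player of a later measurement is already taken, that measurement gets the second-nearest player, which can be wrong with many players and low update rates.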
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and sideways velocities in the body frame can be measured by sensors inside the drone. At the same time, three LEDs on the drone can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise. This makes the subsequent closed-loop control system for the drone more robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the reflected drone position information is incomplete. The example (fig.2) provides a visual impression of the original data measured by the top camera. Based on fig 2, the motion data clearly indicates what the motion of the drone looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation provides a reasonable estimate for the empty data points. <br />
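The gap-filling step can be sketched as follows. The project preprocessing is done in MATLAB; this is an equivalent Python sketch using linear interpolation, with missing camera samples represented as NaN (an assumption about the data format).

```python
import numpy as np

def fill_gaps(t, x):
    """Linearly interpolate missing top-camera samples (NaN entries), as a
    simple stand-in for the preprocessing step described above."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    valid = ~np.isnan(x)
    # Interpolate the missing samples from the valid ones.
    return np.interp(t, t[valid], x[valid])
```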
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: one is the body frame, the other is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between body frame and global frame via the rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter. The model identified is the response to the input commands (a, b, c and d) in the body frame. The filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, thereby avoiding a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
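The frame transformation above is a standard planar rotation by the yaw angle; a minimal Python sketch (the project uses MATLAB):

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame using the yaw
    angle psi (radians)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)

def global_to_body(vx_glob, vy_glob, psi):
    """Inverse rotation: map global-frame data back into the body frame,
    as done before feeding measurements to the body-frame Kalman filter."""
    return body_to_global(vx_glob, vy_glob, -psi)
```

For example, with a yaw of 90°, a pure forward body velocity maps onto the global y-axis.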
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response of input b is measured by the top camera. The preprocessed data is shown in following. And this processed data will be used in model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from the data shown above. In the real world, nothing is perfectly linear, due to external disturbances and component uncertainty. Hence, some assumptions are needed to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves similarly to a second-order system. The states are defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result represents how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there is a delay of four samples due to the wireless communication. Compared with the results measured several times, the estimate is reasonable. <br><br><br />
<br />
In the real world, nothing is perfectly linear; the nonlinear behavior of the system may cause the mismatch in the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the subsequent Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated. The data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with states X = [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the recent state of the drone and information about the field of view and resolution of the camera (which are defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms to reduce false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill is implemented using the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the available height of the agent carrying the camera, the field of view of the camera, and the real size of the ball. The height information is obtained from the drone position data; the others are defined in the initialization function. The estimated ball radius in pixel units is fed into the ball detection skill.<br />
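The radius estimate can be sketched as follows. The project code is MATLAB; this Python sketch is illustrative and the function name is an assumption.

```python
import math

def expected_ball_radius_px(ball_radius_mm, height_mm, fov_deg, n_pixels):
    """Expected ball radius in pixels, from the drone height, the camera
    field of view along one image axis, and the real ball radius."""
    # Real-world span covered by the image along this axis at this height:
    span_mm = 2.0 * height_mm * math.tan(math.radians(fov_deg) / 2.0)
    px_per_mm = n_pixels / span_mm
    return ball_radius_mm * px_per_mm
```

As the drone flies higher, the span grows and the expected radius in pixels shrinks, so the detector's search range must be updated continuously.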
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, here the real sizes of the objects are defined. The estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. This estimator continuously calculates the relative position of the outer lines corresponding to the state of the drone. This position information is encoded using Hough transform criteria. The line estimator is required for enabling and disabling line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled; otherwise it should be disabled. This information is also encoded in the output matrix, because an always-running Line Detection Skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill. Since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information and the algorithm behind this estimator are explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the output matrix to show whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone given on the manufacturer’s website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS), and it sends high-quality HD video streams to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, controlling a drone is complicated and also out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing it should face downward. Therefore, the first idea was to disassemble it and connect the camera to a swivel to tilt it down 90 degrees, which would require some structural changes. Since all the implementation is achieved in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera is not straightforward in MATLAB. Further effort showed that using this drone camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore, an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure. The captured images have a ratio of 16:9. Using this fact, measurements showed the FOV to be close to 70°, although the camera is specified to have a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2. Here the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera is called the Ai-Ball and is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
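For illustration, the UDP setup described above can be sketched in Python (the project uses MATLAB UDP objects). The host, ports and timeout follow the initialization values listed above; the `AT*FTRIM` command syntax follows the AR.Drone SDK.

```python
import socket

DRONE_HOST = "192.168.1.1"   # remote host, from the initialization above
CONTROL_PORT = 5556          # control local port
NAVDATA_PORT = 5554          # navdata local port

def at_ftrim(seq):
    """Build the FTRIM command string (AT*FTRIM per the AR.Drone SDK);
    it sets the horizontal-plane reference before flight. AT commands
    carry an increasing sequence number."""
    return f"AT*FTRIM={seq}\r"

def open_drone_sockets():
    """Open the control and navdata UDP sockets with the listed settings."""
    control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata.settimeout(0.001)  # 1 ms timeout, as in the initialization
    return control, navdata

# Usage (requires a connected drone):
#   control, navdata = open_drone_sockets()
#   control.sendto(at_ftrim(1).encode(), (DRONE_HOST, CONTROL_PORT))
```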
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
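The wrapper's input side can be sketched as follows. This is an illustrative Python sketch of the interface only (the actual wrapper is MATLAB and also handles the UDP string encoding and navdata decoding); the clamping is an added safeguard, not necessarily present in the project code.

```python
def clamp(v):
    """Limit a command value to the valid range [-1, 1]."""
    return max(-1.0, min(1.0, v))

def make_command(tilt_x, tilt_y, v_z, v_psi):
    """Wrapper input as described above: four values in [-1, 1] for
    front (x) tilt, left (y) tilt, vertical speed and angular speed."""
    return [clamp(tilt_x), clamp(tilt_y), clamp(v_z), clamp(v_psi)]
```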
<br />
== Top-Camera ==<br />
The top camera is a wide angle camera that is fixed above the playing field and able to see the whole field. <br />
<br />
This camera is used to measure the location and orientation of the drone. This measurement is used as feedback for the drone to position itself to a desired location.<br />
<br />
The ‘gigecam’ toolbox of MATLAB® is used to access this camera. To obtain the indoor position of the drone, 3 ultra-bright LEDs are placed on top of the drone. A snapshot of the field together with the agent is taken with a short exposure time. By processing this image to find the pixels illuminated by the LEDs on the drone, the coordinates on the x and y axes are obtained. The yaw (\psi) orientation of the drone is also obtained from the relative positions of these pixels.<br />
<br />
The topcam can stream images to the laptop at a framerate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
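The pose computation from the three LED pixels can be sketched as follows. This Python sketch is illustrative: the LED layout assumed here (one front LED, two rear LEDs) is an assumption, as the project's actual layout is not documented on this page.

```python
import math

def drone_pose_from_leds(front, rear_left, rear_right):
    """Estimate the drone (x, y, psi) in image coordinates from the pixel
    positions of the three top-mounted LEDs (assumed layout: one front,
    two rear). Position is the centroid; yaw comes from the direction
    from the rear-LED midpoint to the front LED."""
    cx = (front[0] + rear_left[0] + rear_right[0]) / 3.0
    cy = (front[1] + rear_left[1] + rear_right[1]) / 3.0
    mx = (rear_left[0] + rear_right[0]) / 2.0   # midpoint of rear LEDs
    my = (rear_left[1] + rear_right[1]) / 2.0
    psi = math.atan2(front[1] - my, front[0] - mx)
    return cx, cy, psi
```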
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After some searching, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a WiFi connection. To connect to the camera, a Wi-Fi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real dimension per pixel. This information is embedded into the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details of the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any expansion, as parts of the extensive existing code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block is to track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These outputs of the path-planning block are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple paths such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig.2, the drone states (x,y,θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the drone's built-in speed controller, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state consists of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative with PD coefficients (Fig.4). Since there is no position-dependent force in the drone's equation of motion, an I-action is not necessary. Furthermore, to avoid oscillation in the unstable region of the drone's built-in LLC, errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents small commands in the oscillation region from being sent to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that, these errors are calculated with respect to the global coordinate system. Hence, the control command first must be transformed in to drone coordinate system with rotational matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
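The per-axis behaviour described above (dead zone plus PD action, without offsetting the error by the dead-zone width) can be sketched as follows. The gain values in the usage example are purely illustrative; the tuned values are not given in the text.

```python
def dead_zone_pd(error, d_error, dead_zone, kp, kd):
    """Per-axis high-level control command: zero output inside the dead
    zone, plain PD outside it.  The error is deliberately NOT offset by
    the dead-zone width, so that commands just outside the dead zone are
    large enough to avoid the LLC's oscillatory region.
    """
    if abs(error) < dead_zone:
        return 0.0  # comfort zone: drone holds position
    return kp * error + kd * d_error
```

For instance, with a 0.1 m dead zone, an error of 0.05 m yields no command at all, while an error of 0.5 m yields a full PD command.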
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Ф,φ,θ) about sequentially displaced axes of the reference frame. These angles are generally referred to as Euler angles; within this method, the order of the rotations about the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
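With roll and pitch assumed zero, transforming a planar command from the global (field) frame into the drone body frame is a single 2-D rotation about the yaw angle ψ. A minimal sketch:

```python
import math

def world_to_drone(vx_w, vy_w, yaw):
    """Rotate a planar velocity command from the global (field) frame
    into the drone body frame.  With roll = pitch = 0, the full RPY
    rotation matrix reduces to a 2-D rotation about yaw, and the
    world-to-body transform is its transpose (inverse).
    """
    c, s = math.cos(yaw), math.sin(yaw)
    vx_b = c * vx_w + s * vy_w
    vy_b = -s * vx_w + c * vy_w
    return vx_b, vy_b
```

For example, a command pointing along the global x-axis, sent to a drone yawed 90°, becomes a command along the drone's negative y-axis.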
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy with a cover is shown. This cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot from a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
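The UDP side of this chain can be sketched as follows. Note that the host address, port and command string used here are placeholders for illustration only; the real values and the actual command format are defined in the GitHub repository.

```python
import socket

def send_command(cmd, host="192.168.1.10", port=5005):
    """Send one command string over UDP to the Python script running on
    the robot's Raspberry Pi.  Address, port and command format are
    placeholders -- see the project repository for the real ones.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(cmd.encode("ascii"), (host, port))
    finally:
        sock.close()
```

The MATLAB functions and the Android application do essentially the same thing: format a command string and push it to the robot's UDP port.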
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. Details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information had to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The TechUnited player robots communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited, used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked: as stated earlier, the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
Implementation MSD16 (contribution by Tolcer, 2017-10-22: /* Locating of the Agents : Drone */)
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we do not alter it and use it as-is. Preferably we would use this software to also process the images from the drone. However, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
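The RGB-to-YCbCr conversion can be sketched per pixel using the standard ITU-R BT.601 coefficients (the same convention MATLAB's rgb2ycbcr uses for 8-bit images). Y encodes brightness while Cb/Cr encode color, which makes thresholding field green or ball orange largely independent of lighting.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr (ITU-R BT.601).
    Y is in [16, 235]; Cb and Cr are centred at 128.
    """
    y  =  16.0 + ( 65.481 * r + 128.553 * g +  24.966 * b) / 255.0
    cb = 128.0 + (-37.797 * r -  74.203 * g + 112.000 * b) / 255.0
    cr = 128.0 + (112.000 * r -  93.786 * g -  18.214 * b) / 255.0
    return y, cb, cr
```

White and black both map to Cb = Cr = 128, confirming that the chroma channels carry only color information.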
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm was updated and reused. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls used can be red, orange or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which pixels that fall into this corner get a value of 1 and the rest a value of 0. Next, to filter out noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. A blob recognition algorithm then returns the blobs with their properties, such as the blob center and major- and minor-axis lengths. From this list it is determined which blobs could be a ball: blobs that are too big or too small are removed. For each remaining candidate, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
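The confidence formula above can be sketched as a small function. One assumption here: the blob radius Rblob is taken as the mean of the two semi-axes, since the text does not state how it is computed from the blob properties.

```python
def ball_confidence(minor_axis, major_axis, r_expected):
    """Confidence that a blob is the ball:
    roundness (minor/major axis ratio) times how well the blob radius
    matches the expected ball radius at the current drone altitude.
    Rblob is assumed to be the mean of the two semi-axes.
    """
    r_blob = (minor_axis + major_axis) / 4.0
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_expected) / max(r_blob, r_expected)
    return roundness * size_match
```

A perfectly round blob of exactly the expected size scores 1.0; elongated or wrongly sized blobs score lower, and a threshold on this score rejects them.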
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering on the CbCr plane, the color filtering is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of accepted blob sizes is larger for object detection than for ball detection. This is because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. A detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore an improvement was added to handle the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is still known (at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it still sometimes yields false positives and false negatives, so further improvement of the refereeing remains necessary.<br />
<br />
=== Collision detection ===<br />
For collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection. It uses the list of blobs generated by the object detection algorithm. For each blob in this list, the minor- and major-axis lengths are checked: the axes are compared to each other to determine the roundness of the object, and compared with the minimal expected player radius. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
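The intuition behind this condition is that two touching players merge into one elongated blob that is at least two player radii wide and four long. A sketch of the check:

```python
def is_possible_collision(minor_axis, major_axis, r_min):
    """Image-based collision test: flag a single blob that is both
    elongated (axis ratio > 1.5) and large enough to contain two
    players of minimal radius r_min (two touching players merge into
    one blob in the binary image).
    """
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * r_min
            and major_axis >= 4 * r_min)
```

A lone round player fails the elongation test, while a merged two-player blob passes all three.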
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch, yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, for the refereeing tasks and image processing the drone altitude must also be known. The drone has its own altimeter, whose output data is accessible; the obtained altitude is fused with the planar position data. The information from the different position measurements is composed into the vector given below, which is used as 'droneState'.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
The ball and object detection skills yield the detected object coordinates in pixels. To define the locations of the detected objects in the image, the obtained pixel coordinates must be transformed into the field reference coordinate axes. The coordinate system of the image in MATLAB is given below. Note that the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of each detected object or ball are calculated relative to the center of the image. This data is processed further under the following assumptions:<br />
* The center of the image is assumed to be focal center of the camera and this is coincident with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of the gravity of the drone (which is its origin) to the camera lies along the x-axis of the drone and known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of image) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the drone's x-axis. Taking the principles above into account and adding the known (measured) drone position to the camera's position vector (including the yaw ψ orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Finally, the calculated pixel coordinates of the detected object must be converted into real-world units (from pixels to millimeters). This conversion factor changes with the height of the camera. Using the height of the drone and the FOV information of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
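The whole pixel-to-field transformation described above can be sketched as follows, under the listed assumptions (camera parallel to the ground, roll = pitch = 0, camera offset along the drone x-axis). The image-axis sign conventions here are illustrative; the real alignment is fixed by the figure above.

```python
import math

def pixel_to_field(px, py, drone_state, mm_per_px, cam_offset_mm,
                   res=(640, 480)):
    """Map a detected pixel (px, py) to field coordinates in mm.
    drone_state = (x, y, psi, z); z is already folded into mm_per_px.
    cam_offset_mm is the known camera offset along the drone x-axis.
    """
    x_d, y_d, psi = drone_state[0], drone_state[1], drone_state[2]
    # pixel offsets relative to the image centre, converted to millimetres
    u = (px - res[0] / 2.0) * mm_per_px
    v = (py - res[1] / 2.0) * mm_per_px
    # camera centre (= image centre on the ground) in the field frame
    cx = x_d + cam_offset_mm * math.cos(psi)
    cy = y_d + cam_offset_mm * math.sin(psi)
    # rotate the image offsets into the field frame by the drone yaw
    fx = cx + u * math.cos(psi) - v * math.sin(psi)
    fy = cy + u * math.sin(psi) + v * math.cos(psi)
    return fx, fy
```

For a drone at the field origin with zero yaw, a detection at the image centre maps exactly to the camera offset point in front of the drone.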
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill is to be performed by each agent. For instance, this block sends 'detect ball' as a task to agent A (the drone) and 'locate player' to agent B. The path planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent's controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of objects such as the ball, whether or not it has been updated by an agent camera; in the latter case, the particle filter gives an estimate of the ball's position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path planning block. The first is related to the case of multiple drones, in order to avoid collisions between them. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns an agent the task of locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line); if instead the estimated ball position some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to set as the desired reference. To solve it, we need a model of the drone motion including the controller, to calculate the time it takes to reach a certain point given the drone's initial condition. Then, in the search algorithm, the drone's time to target (TT) is calculated for each time step ahead of the ball (see Fig.3); the target position is simply the ball position predicted that time ahead. The reference position is the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. Furthermore, the same strategy can be applied to the ground agents, which move only in one direction: for the ground robot, the reference value should be determined only along the Turtle's direction of motion, so only the X-component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
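The search over the look-ahead time t0 can be sketched as follows. The drone-plus-controller model is not given in the text, so here `time_to_target` is any caller-supplied function; a constant-velocity ball prediction and the step size are also illustrative choices.

```python
def reference_ahead(ball_pos, ball_vel, time_to_target, dt=0.05, t_max=5.0):
    """Find the smallest look-ahead time t0 such that the drone's
    time-to-target TT for the ball position predicted t0 ahead satisfies
    TT <= t0 (the t0 = TT condition of Fig.3, discretised in steps dt).
    Returns the corresponding target position.
    """
    t0 = 0.0
    target = ball_pos
    while t0 < t_max:
        # ball position t0 seconds ahead, assuming constant velocity
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(*target) <= t0:
            return target
        t0 += dt
    return target  # fall back to the furthest prediction
```

For example, with a drone model that flies at 1 unit/s from the origin and a ball at (2, 0) approaching at 0.5 unit/s, the meeting point is found near x = 4/3 of the remaining distance.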
<br />
=== Collision avoidance ===<br />
When drones fly above a field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has higher priority than the optimal path planning computed from the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is done by sending a relatively strong velocity command to each drone, perpendicular to its velocity vector, in a direction that maintains a safe distance; this command is sent to the LLC and stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
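The storage pattern described above (globally readable, writable only through set functions) can be sketched as follows. This is a hypothetical Python rendering of the MATLAB class; the exact set-function names and whether n counts players per team or in total follow the text's Table 1 only loosely.

```python
class Player:
    """Last known planar position of one player."""
    def __init__(self):
        self.pos = None

class WorldModel:
    """Storage part of the WM.  Positions may only be changed through
    the set_* methods, mirroring the 'set' functions of Table 1, which
    keeps processes from accidentally overwriting WM data.
    """
    def __init__(self, n_players_per_team):
        self.ball = None
        self.drone = None
        self.turtle = None
        # players are a class of their own, since their number can vary;
        # two teams of n players each is assumed here
        self.players = [Player() for _ in range(2 * n_players_per_team)]

    def set_ball(self, pos):
        self.ball = pos

    def set_drone(self, state):
        self.drone = state

    def set_turtle(self, pos):
        self.turtle = pos

    def set_player(self, idx, pos):
        self.players[idx].pos = pos
```

Reading is direct (`W.ball`, `W.players[0].pos`), while every write goes through one named setter, as the tables describe.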
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; rather, its hypothesis is updated purely by new measurements. In case two consecutive measurements are more than 0.5 meters away from the estimate at that time, the latter measurement acts as the new initial value for the strong filter. <br><br><br />
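This reset logic can be sketched as follows in Python (illustrative only; the project implementation is in MATLAB/Simulink, and the blending constant `alpha` and the class name are stand-ins for the tuned values):

```python
# Illustrative sketch (not the project code): a 'strong' constant-velocity
# estimate that is reset when two consecutive measurements disagree with it.
RESET_DIST = 0.5   # metres, as described above
RESET_COUNT = 2    # consecutive outliers needed before resetting

class BallFilter:
    def __init__(self, x, y):
        self.pos = (x, y)
        self.vel = (0.0, 0.0)
        self.outliers = []          # recent measurements far from the estimate

    def update(self, z, dt, alpha=0.1):
        # predict with the current velocity ('strong' behaviour)
        px = self.pos[0] + self.vel[0] * dt
        py = self.pos[1] + self.vel[1] * dt
        dist = ((z[0] - px) ** 2 + (z[1] - py) ** 2) ** 0.5
        if dist > RESET_DIST:
            self.outliers.append(z)
            if len(self.outliers) >= RESET_COUNT:
                # change of direction: re-initialise on the latest measurement
                zx_old, zy_old = self.outliers[-2]
                self.vel = ((z[0] - zx_old) / dt, (z[1] - zy_old) / dt)
                self.pos = z
                self.outliers = []
        else:
            self.outliers = []
            # blend prediction with the measurement (strong: small alpha)
            self.pos = (px + alpha * (z[0] - px), py + alpha * (z[1] - py))
        return self.pos
```

A single far-away measurement leaves the estimate untouched, while a second one in the same vicinity resets both position and velocity, mirroring the behaviour described above.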
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given in Table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. it trusts the measurements more), and increasing α_z makes the filter ‘stronger’ with respect to the direction but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are both used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
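The variance-based weighting described here can be sketched as standard inverse-variance fusion. This is an illustrative Python sketch of the idea, not the project code; `fuse` is a hypothetical helper name:

```python
# Hypothetical sketch of inverse-variance weighting for fusing one
# measurement into an estimate; smaller variance -> more trust.
def fuse(est, est_var, z, z_var):
    """Combine an estimate and a measurement, each with a variance."""
    w = est_var / (est_var + z_var)        # weight given to the measurement
    new_est = est + w * (z - est)
    new_var = est_var * z_var / (est_var + z_var)
    return new_est, new_var

# An accurate sensor (low variance) pulls the estimate more than a noisy one.
e1, v1 = fuse(0.0, 1.0, 1.0, 0.1)    # trusted sensor
e2, v2 = fuse(0.0, 1.0, 1.0, 10.0)   # noisy sensor
```

With a fixed variance per source, as in the current implementation, `z_var` is simply a constant, but the same function accepts a per-measurement confidence once the sensors provide one.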
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one player to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with sensors that can detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
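The matching idea can be sketched as follows (an illustrative Python sketch, not the project code; `match` is a hypothetical name, and conflicts are resolved in measurement order as described):

```python
# Sketch of the matching idea: each measurement is matched to the nearest
# known player; if that player is already taken by an earlier measurement,
# the next-nearest free player is used instead.
def match(measurements, players):
    """Return a list: entry i gives the player index for measurement i."""
    taken = set()
    result = []
    for mx, my in measurements:
        # players sorted by squared distance to this measurement
        order = sorted(range(len(players)),
                       key=lambda j: (players[j][0] - mx) ** 2 +
                                     (players[j][1] - my) ** 2)
        # nearest player not yet claimed
        chosen = next(j for j in order if j not in taken)
        taken.add(chosen)
        result.append(chosen)
    return result
```

For two players and a high update rate this greedy assignment is generally adequate, which matches the observation above; with many players entering and leaving a sensor's field of view, an optimal assignment (e.g. solving the full bipartite matching) would be more robust.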
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and sideways velocities in the body frame can be measured by sensors inside the drone. In addition, there are three LEDs on the drone which can be detected by the camera above the field. From the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the top camera cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and suppress the measurement noise, so that the closed-loop control of the drone can be robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are applied as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the measured drone position information is incomplete. The example (fig. 2) gives a visual impression of the original data measured by the top camera; the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces a reasonable estimate for the empty data points. <br />
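The gap-filling step can be sketched as plain linear interpolation between the neighbouring valid samples (an illustrative Python sketch; the project preprocessing is done in MATLAB, and the exact interpolation method used there is not specified):

```python
# Sketch: fill empty camera samples (None) by linear interpolation between
# the neighbouring valid measurements. Gaps at the edges are left unfilled.
def interpolate_gaps(samples):
    out = list(samples)
    n = len(out)
    for i in range(n):
        if out[i] is None:
            # nearest valid (or already filled) sample to the left
            lo = i - 1
            while lo >= 0 and out[lo] is None:
                lo -= 1
            # nearest valid original sample to the right
            hi = i + 1
            while hi < n and samples[hi] is None:
                hi += 1
            if lo >= 0 and hi < n:
                frac = (i - lo) / (hi - lo)
                out[i] = out[lo] + frac * (samples[hi] - out[lo])
    return out
```

For example, `[0.0, None, None, 3.0]` becomes `[0.0, 1.0, 2.0, 3.0]`, which is the kind of "reasonable guess" visible in the processed-data figure.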
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one fixed to the body frame, the other the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d), and the measured velocities are also expressed in the body frame. The positions measured by the top camera are calculated in the global coordinate system. <br><br><br />
Data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, which avoids a parameter-varying Kalman filter. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world no system is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the measured response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model is evaluated in MATLAB by comparing it with the measured response; the result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman models the AR.Drone with a delay of 4 samples due to the wireless communication. Compared with repeated measurements, the estimation is nevertheless reasonable. <br><br><br />
<br />
No real system is perfectly linear; this nonlinear behavior may cause the mismatch in the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the subsequent Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br> <br />
The model is then:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
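The identified state-space models then feed a standard discrete Kalman filter. The sketch below shows the predict/update cycle for one axis with an assumed double-integrator model and a position-only measurement; the actual identified matrices and noise covariances would replace `A`, `B`, `q` and `r` (all values here are illustrative, not the project's). When the top camera misses the LEDs, only the prediction step is run:

```python
# Minimal discrete Kalman filter sketch for one axis, state [position,
# velocity], position-only measurement (H = [1 0]). Illustrative only.
def kf_step(x, P, u, z, dt, q=1e-3, r=0.05):
    # assumed double-integrator transition and input matrices
    A = [[1.0, dt], [0.0, 1.0]]
    B = [0.5 * dt * dt, dt]
    # predict
    xp = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
          A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    if z is None:
        # camera missed the LEDs: keep the prediction
        return xp, Pp
    # update with position measurement z
    S = Pp[0][0] + r
    K = [Pp[0][0] / S, Pp[1][0] / S]
    y = z - xp[0]                       # innovation
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn
```

This shows why the filter remains usable even with 25% missing camera frames: the prediction bridges the gaps, and each valid measurement pulls the estimate back toward the observation.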
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in case of ''circle detection'' the expected diameter of the circles is important. For this reason, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined here are the ball size, object size and line estimators. Using the recent state of the drone together with the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, reducing false positives, errors and the processing time of the algorithms. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill is implemented with the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and to reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera, and the real size of the ball. The height is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixels is fed into the ball detection skill.<br />
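The underlying geometry can be sketched as follows (illustrative Python; the function name and the example numbers are assumptions, not project values). A downward-looking camera at height h sees a ground strip of width 2·h·tan(FOV/2), which maps onto the image width in pixels:

```python
import math

# Sketch of the pixel-radius estimate for a downward-looking camera.
def expected_radius_px(height_m, fov_deg, image_width_px, ball_radius_m):
    # real-world width of the ground strip seen by the camera
    ground_width = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    pixels_per_metre = image_width_px / ground_width
    return ball_radius_m * pixels_per_metre

# e.g. drone at 2 m, 60 deg horizontal FOV, 640 px wide image, 0.11 m ball
r = expected_radius_px(2.0, 60.0, 640, 0.11)
```

The same relation gives the object size estimate of the next section; only the real-world size plugged in changes. Note that the expected radius shrinks as the drone climbs, which is why the current drone height must be fed into the estimator at every update.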
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV; here the real size of the objects is defined instead of the ball radius. The estimated object radius in pixels is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It continuously calculates the position of the outer lines relative to the state of the drone; this position information is encoded using the Hough transform parametrization. The line estimator is used to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, otherwise it should be disabled. This flag is also encoded in the output matrix, because an always-running Line Detection Skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is given [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1; internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, placed at the front, which is used to capture images. For refereeing, however, it should look downwards. Therefore the first idea was to disassemble the camera and mount it on a swivel to tilt it down 90 degrees, at the cost of some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera to MATLAB is not straightforward; further attempts showed that using this camera is either incompatible with MATLAB or introduces a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with the built-in software. An indirect route is therefore required, which costs processing time: the best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a horizontal FOV of about 70°, although the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
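In Python, an equivalent set-up of the two UDP endpoints could look like this (a sketch only; the project uses MATLAB UDP objects, and the input buffer size and the navdata handshake of the next step are omitted here):

```python
import socket

# Sketch of the UDP set-up, using the host and ports listed above.
DRONE_IP = "192.168.1.1"
CONTROL_PORT = 5556
NAVDATA_PORT = 5554

def make_udp_sockets(timeout_s=0.001):
    # control: commands are sent to the drone, no reply expected
    control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # navdata: the drone streams status packets back to this port
    navdata = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata.settimeout(timeout_s)      # 1 ms timeout, as initialised above
    navdata.bind(("", NAVDATA_PORT))   # listen for incoming navdata
    return control, navdata
```

Commands would then be sent with `control.sendto(data, (DRONE_IP, CONTROL_PORT))` and navdata read with `navdata.recvfrom(500)`.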
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively, the third value is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
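Under the hood, the AR.Drone SDK encodes such commands as AT*PCMD strings, in which each floating-point argument is transmitted as the decimal value of the 32-bit integer sharing its IEEE-754 bit pattern. A sketch of that packing (illustrative Python, not the project's MATLAB wrapper; `pcmd` is a hypothetical helper):

```python
import struct

# Reinterpret a 32-bit float's bit pattern as a signed 32-bit integer,
# which is how the AR.Drone SDK expects float arguments to be sent.
def float_to_int_bits(f):
    return struct.unpack("<i", struct.pack("<f", f))[0]

# Build an AT*PCMD command from the four wrapper inputs in [-1, 1].
def pcmd(seq, roll, pitch, gaz, yaw, flag=1):
    vals = ",".join(str(float_to_int_bits(v)) for v in (roll, pitch, gaz, yaw))
    return "AT*PCMD={},{},{}\r".format(seq, flag, vals)
```

For example, `pcmd(1, 0.0, 0.0, 0.0, 0.0)` yields `"AT*PCMD=1,1,0,0,0,0\r"`, since the bit pattern of 0.0 is all zeros.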
<br />
== Top-Camera ==<br />
The topcam is a camera fixed above the playing field. It is used to estimate the location and orientation of the drone, and this estimate serves as feedback for the drone to position itself at a desired location.<br />
The topcam can stream images to the laptop at a framerate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After this search, a Wi-Fi webcam was finally chosen, whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is mounted facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV), whose definition is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel, and it is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agent positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image-processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple trajectories such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor performs only small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. With identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Furthermore, using the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values of the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than this value, the output is determined by the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region have not been offset by the dead-zone width. This approach prevents small commands in the oscillation region from being sent to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix based on Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
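The dead-zone PD law can be sketched as follows (illustrative Python; the zone width and gains are placeholders, not the tuned project values):

```python
# Sketch of the dead-zone PD law described above, for one state error.
DEAD_ZONE = 0.05   # [m] no command inside this error band (comfort zone)
KP, KD = 0.8, 0.2  # illustrative PD gains

def dead_zone_pd(error, d_error):
    if abs(error) <= DEAD_ZONE:
        return 0.0   # drone rests inside its comfort zone
    # note: the error is deliberately NOT offset by the dead-zone width,
    # to avoid small commands in the oscillatory region of the LLC
    return KP * error + KD * d_error
```

The discontinuity at the zone boundary is intentional: jumping straight to `KP * error` rather than `KP * (error - DEAD_ZONE)` keeps the first non-zero command large enough to clear the oscillatory region of the drone's built-in speed controller.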
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles, and within this method the order of rotation around the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is kept constant and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
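The resulting yaw-only transformation can be sketched as (illustrative Python; the project applies the same rotation inside Simulink):

```python
import math

# Rotate a global-frame command (vx_g, vy_g) into the drone body frame,
# using only the yaw angle, as described above.
def global_to_body(vx_g, vy_g, yaw_rad):
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    vx_b =  c * vx_g + s * vy_g
    vy_b = -s * vx_g + c * vy_g
    return vx_b, vy_b
```

For example, with the drone yawed 90° to the left, a global +x command becomes a pure -y command in the body frame, which is exactly the correction the HLC output needs before being sent as a fly command.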
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state can be computed, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. Details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally; therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle. This information is read by the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
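As a rough illustration of such a UDP link, here is a self-contained Python loop-back sketch. The packet layout, helper names and port handling are hypothetical — the real link uses the S-function and the Simulink UDP blocks described above.

```python
import socket
import struct

# Hypothetical packet layout: six doubles carrying the turtle, ball and
# player (x, y) positions; the real S-function uses its own format.
PACKET = struct.Struct('<6d')

def send_state(sock, addr, turtle, ball, player):
    """Pack three (x, y) pairs into one datagram and send it."""
    sock.sendto(PACKET.pack(*turtle, *ball, *player), addr)

def recv_state(sock):
    """Receive one datagram and unpack it into three (x, y) pairs."""
    data, _ = sock.recvfrom(PACKET.size)
    vals = PACKET.unpack(data)
    return vals[0:2], vals[2:4], vals[4:6]

# loop-back demonstration on an OS-assigned local port
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(('127.0.0.1', 0))
rx.settimeout(2.0)
addr = rx.getsockname()
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_state(tx, addr, (1.0, 2.0), (0.5, -0.5), (3.0, 4.0))
turtle, ball, player = recv_state(rx)
tx.close()
rx.close()
```

The same fixed-layout idea underlies the Simulink UDP Send/Receive pair: both ends must agree on the byte order and field order of the packet.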
<br />
=References=<br />
<references/></div>

Implementation MSD16 (2017-10-22)
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the BOOP detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as-is and do not alter it. Preferably, we would use this software to also process the images from the drone. However, understanding years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm was updated and reused. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to the code, but the algorithm itself is unchanged. The essential update is that the line detection code was separated from the combined detection code created by the previous generation and turned into an isolated skill function.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used can be red, orange or yellow; colors that are in the upper-left corner of the CbCr-plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image a blob recognition algorithm returns blobs with their properties, such as the blob center and major- and minor axis length. From this list with blobs and their properties, it is determined if it could be a ball. Blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minorAxis / majorAxis) * (min(Rblob, Rball) / max(Rblob, Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
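The confidence formula above can be sketched in code. The project's implementation is in MATLAB; in the Python sketch below, the blob radius Rblob is taken as the mean of the two semi-axes (an assumption, since the text does not define it) and the expected ball radius Rball is assumed to come from the size estimator.

```python
def ball_confidence(minor_axis, major_axis, r_ball):
    """Confidence that a blob is the ball, from roundness and size.

    minor_axis, major_axis: blob axis lengths in pixels
    r_ball: expected ball radius in pixels (from the size estimator)
    """
    r_blob = (minor_axis + major_axis) / 4.0  # mean semi-axis (assumption)
    roundness = minor_axis / major_axis       # 1.0 for a perfect circle
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)  # 1.0 for exact size
    return roundness * size_match

# A perfectly round blob of exactly the expected size scores 1.0
print(ball_confidence(20, 20, 10))  # -> 1.0
```

Both factors lie in (0, 1], so the product penalizes blobs that are either elongated or the wrong size.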
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering on color in the CbCr plane, the filtering is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of accepted blob sizes is larger for object detection than for ball detection, because the players are not perfectly round like the ball: a player seen from the top appears different from one seen at an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. A detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. It was therefore extended to handle the cases where the ball position is predicted by the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is still known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision-detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take the images of the playing field and we see in the image that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both these methods would be ideal. However, since the collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs that is generated in the object detection algorithm. For each blob in this list, the length of the minor- and major axes are checked. The axes are compared to each other to determine the roundness of the object. Moreover, the axes are compared with the minimal expected radius of the player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
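The condition above can be wrapped in a small predicate. The project's code is MATLAB; this Python sketch only restates the thresholds given in the text.

```python
def possible_collision(minor_axis, major_axis, min_obj_radius):
    """Flag a blob as a possible collision between two players.

    An elongated blob that is large in both axes suggests two players
    standing against each other (no gap between their blobs).
    """
    elongated = (major_axis / minor_axis) > 1.5
    wide_enough = minor_axis >= 2 * min_obj_radius
    long_enough = major_axis >= 4 * min_obj_radius
    return elongated and wide_enough and long_enough
```

A single round player-sized blob fails the elongation test, so it is not reported as a collision.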
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in different ways. In this project, the planar position (x, y, ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for controlling the drone itself, they are not important for the refereeing tasks, because all refereeing and image-processing algorithms are developed under an essential assumption: the drone's angular positions are stabilized well enough that roll (φ) and pitch (θ) are zero. Therefore, these two angles are not taken into account. <br />
The top camera yields the planar position of the drone with respect to the field reference frame, i.e. x, y and yaw (ψ). However, to handle the refereeing tasks and image processing, the drone altitude must also be known. The drone has its own altimeter, and its output data is accessible. The obtained altitude is fused with the planar position data, and the resulting information from the different position measurements is composed into the vector given below, which is used as the 'droneState'.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference frame. The coordinate system of the image in MATLAB is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of each detected object or ball are calculated relative to the center of the image. This data is processed further under the following assumptions:<br />
* The center of the image is assumed to be the focal center of the camera.<br />
* The camera is always parallel to the ground plane, i.e. tilting of the drone about its roll (φ) and pitch (θ) axes is neglected.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between the camera, the drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (image center) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the x-axis of the drone. Taking into account the assumptions above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference frame can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). The conversion factor changes with the height of the camera, so the height information of the drone must be used. Using the height of the drone and the field-of-view (FOV) of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
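Under a pinhole-camera, straight-down-looking assumption, the pixel-to-length ratio follows directly from the altitude and the horizontal FOV. The actual conversion lives in the MATLAB code; the function name and units below are illustrative.

```python
import math

def pixels_to_meters(px, image_width_px, fov_deg, altitude_m):
    """Convert a pixel offset from the image center into meters on the
    ground plane, assuming the camera looks straight down.

    The ground width covered by the image is 2*h*tan(FOV/2), so one
    pixel corresponds to that width divided by the image width.
    """
    ground_width = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    meters_per_pixel = ground_width / image_width_px
    return px * meters_per_pixel
```

For example, at 2 m altitude with a 90° FOV and a 640-pixel-wide image, half the image width (320 px) maps to 2 m on the ground.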
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents, sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent; for instance, it sends 'detect ball' as a task to agent A (drone) and 'locate player' to agent B. The path-planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path-planning block: first, avoiding collisions between drones in the case of multiple drones; second, generating an optimal path as the reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns an agent the task of locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can also be exploited. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object so that it meets the object at the intersection of the velocity vectors. Using the current ball position as the reference input would result in a curved trajectory (red line), whereas sending the estimated position of the ball some time ahead as the reference results in a less curved, shorter trajectory (blue line). This approach gives better tracking performance, but requires more computational effort. The problem that arises is choosing the optimal time ahead t0 to use as the desired reference. To solve this, we need a model of the drone motion including the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, for each candidate time ahead of the ball, the time-to-target (TT) of the drone is calculated (see Fig.3); the target position is simply extrapolated over the time ahead. The reference position is then the position that satisfies t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should then be determined only in the moving direction of the Turtle, i.e. only the x-component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
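The search for the look-ahead time satisfying t0 = TT can be sketched as follows. The constant-speed drone model below is a placeholder assumption to keep the sketch self-contained; the project uses the identified drone model with its controller instead, and all names are illustrative.

```python
import math

def time_to_target(drone_pos, target_pos, drone_speed):
    """Placeholder drone model: constant speed straight to the target."""
    d = math.hypot(target_pos[0] - drone_pos[0], target_pos[1] - drone_pos[1])
    return d / drone_speed

def reference_position(drone_pos, ball_pos, ball_vel, drone_speed,
                       dt=0.05, t_max=5.0):
    """Search over look-ahead times t0 for the first one at which the
    drone can reach the predicted ball position in time (TT <= t0)."""
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(drone_pos, target, drone_speed) <= t0:
            return target  # [x(t+t0), y(t+t0)] instead of [x(t), y(t)]
        t0 += dt
    # no feasible intercept within t_max: fall back to the current position
    return ball_pos
```

For a stationary ball the search reduces to the current ball position; for a moving ball it returns an intercept point ahead of the ball.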
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, path planning should create paths that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning computed from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision-avoidance mode to repel the drones from getting closer. This is done by sending a relatively strong velocity command to the drones, perpendicular to each drone's velocity vector, in a direction that maintains a safe distance. The command is sent to the LLC and stopped once the drones are in safe positions. Since this project deals with only one drone, collision avoidance was not implemented; it could be a possible area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
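The storage idea with dedicated 'set' functions can be illustrated with a minimal sketch. The actual World Model class is written in MATLAB; the method names below are illustrative, not the real API.

```python
class WorldModel:
    """Storage sketch: state is only changed through 'set' methods,
    so other processes cannot accidentally overwrite it."""

    def __init__(self, n_players):
        self.ball = None                   # (x, y), or None if unknown
        self.drone = None                  # (x, y, psi, z)
        self.turtle = None                 # (x, y, psi)
        self.players = [None] * n_players  # one entry per player

    def set_ball(self, x, y):
        self.ball = (x, y)

    def set_drone(self, x, y, psi, z):
        self.drone = (x, y, psi, z)

    def set_player(self, i, x, y):
        self.players[i] = (x, y)

# initialized as W = WorldModel(n), n players per team
W = WorldModel(2)
W.set_ball(1.0, -0.5)
W.set_player(0, 3.0, 2.0)
```

Reading is direct (e.g. `W.ball`, `W.players[0]`), mirroring the distinction the tables make between set functions and data requests.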
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
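The reset logic just described (two consecutive measurements more than 0.5 m from the estimate trigger a re-initialization of the strong filter) can be sketched as follows. The strong filter itself is the particle filter and is not modeled here; its prediction is passed in from outside, and the class name is illustrative.

```python
import math

class BallTracker:
    """Dual-hypothesis sketch: the 'weak' hypothesis is just the raw
    measurement; two consecutive outliers (> 0.5 m from the strong
    estimate) reset the strong filter to the latest measurement."""

    THRESHOLD = 0.5  # meters, related to typical measurement noise

    def __init__(self, x0):
        self.estimate = x0
        self.outliers = 0

    def update(self, z, filtered):
        """z: new measurement; filtered: strong-filter prediction for
        this step (from the particle filter, not modeled here)."""
        if math.dist(z, self.estimate) > self.THRESHOLD:
            self.outliers += 1
        else:
            self.outliers = 0
        if self.outliers >= 2:
            # change of direction assumed: restart the strong filter
            self.estimate = z
            self.outliers = 0
        else:
            self.estimate = filtered
        return self.estimate
```

A single outlier is treated as a possible false positive; only a repeated deviation in the same vicinity is accepted as a real change of direction.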
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the Turtle. These measurements are all fed to the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors would pass along a confidence parameter, such as a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes between accurate and inaccurate sensors. In the current implementation this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, a particle filter is again used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the sensor(s) detecting multiple players. Thus, the system somehow needs to know which measurement corresponds to which player. This is handled by the 'Match' function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
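The 'Match' function's nearest-neighbor assignment, including the fallback to the next-nearest free player when two measurements claim the same player, can be sketched as below (a Python illustration of the MATLAB function; as the text notes, this greedy scheme is not globally optimal).

```python
import math

def match(measurements, last_positions):
    """Match each measured position to the nearest known player.

    If a measurement's nearest player is already taken, it is pushed
    to its next-nearest free player. Works well for few players at a
    high update rate, but is not a globally optimal assignment.
    """
    assignment = {}
    for i, z in enumerate(measurements):
        # players ordered by distance to this measurement
        order = sorted(range(len(last_positions)),
                       key=lambda j: math.dist(z, last_positions[j]))
        for j in order:
            if j not in assignment.values():
                assignment[i] = j
                break
    return assignment  # measurement index -> player index
```

A globally optimal alternative would be a minimum-cost assignment (Hungarian algorithm), which the text implies could matter once many players enter and leave the field of view.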
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera on top of the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera on top of the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the subsequent closed-loop control of the drone can be robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt, a floating-point value in range [-1, 1]. Command (b) is left-right tilt, a floating-point value in range [-1, 1]. (d) is the drone angular speed in range [-1, 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, ψ) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are applied as inputs, and the corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the reported drone position information is incomplete. The example (fig.2) provides a visual impression of the original data measured by the top camera. Based on fig 2, the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces reasonable estimates for the empty data points. <br />
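The gap-filling step can be illustrated with a simple linear interpolation over missing samples. This is a sketch: the project performed this preprocessing in MATLAB, and empty samples are represented here as None.

```python
def interpolate_gaps(samples):
    """Fill None entries by linear interpolation between the nearest
    valid neighbours; gaps at the ends of the record are left as-is."""
    filled = list(samples)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            lo = i - 1        # last valid sample before the gap
            hi = i
            while hi < len(filled) and filled[hi] is None:
                hi += 1       # first valid sample after the gap
            if lo >= 0 and hi < len(filled):
                for k in range(lo + 1, hi):
                    frac = (k - lo) / (hi - lo)
                    filled[k] = filled[lo] + frac * (filled[hi] - filled[lo])
            i = hi
        else:
            i += 1
    return filled
```

With roughly one sample in four missing, the gaps are short, so linear interpolation between neighbouring camera fixes is a reasonable guess.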
==== Coordinate system introduction ====<br />
As the drone is a flying object with four controlled degrees of freedom in the field, two coordinate systems are used: one in the body frame, the other the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between body frame and global frame via a rotation matrix. To simplify identification, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid making the Kalman filter parameter-varying. Figure 5 shows this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
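The body/global transformation is a standard planar rotation by the yaw angle ψ; a sketch (function names are illustrative):

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame using the
    drone yaw psi (standard 2-D rotation matrix)."""
    c, s = math.cos(psi), math.sin(psi)
    vx_g = c * vx_body - s * vy_body
    vy_g = s * vx_body + c * vy_body
    return vx_g, vy_g

def global_to_body(vx_g, vy_g, psi):
    """Inverse rotation: global frame back to the body frame."""
    return body_to_global(vx_g, vy_g, -psi)
```

Applying these rotations outside the filter is what keeps the Kalman filter itself time-invariant, as described above.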
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are, in theory, decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was modelled with a delay of 4 samples due to the wireless communication. Compared with repeated measurements, the estimate is nonetheless reasonable. <br><br><br />
<br />
In practice the system is not perfectly linear; this nonlinear behaviour may explain the mismatch between the identified model and the measurements.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue that has been investigated. The data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector <math>[\dot{y}\ \ y]^T</math>, i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, for ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks that are defined are the ball size, object size and line estimators. Using the recent state of the drone together with the camera's field of view and resolution (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, reducing false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill is implemented using the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be provided. It can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixels is fed into the ball detection skill.<br />
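The geometry behind this estimate can be sketched as follows (a Python sketch; the project implements it in MATLAB, and the numeric values in the example are illustrative, not project parameters):

```python
import math

def expected_ball_radius_px(height_m, fov_deg, image_px, ball_radius_m):
    """Expected ball radius in pixels for a downward-facing camera.
    The ground footprint along one image axis is 2*h*tan(FOV/2);
    dividing by the pixel count gives metres per pixel."""
    footprint_m = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    metres_per_px = footprint_m / image_px
    return ball_radius_m / metres_per_px

# e.g. drone at 2 m altitude, 60 deg horizontal FOV, 640 px wide image,
# 0.11 m ball radius (all illustrative values)
r_px = expected_ball_radius_px(2.0, 60.0, 640, 0.11)
```

The resulting radius (with a tolerance band) would then be passed to the circle detection.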
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV; here the real size of the objects is used instead of the ball radius. The estimated object radius in pixels is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It continuously calculates the position of the outer lines relative to the current state of the drone, encoded using the Hough transform parametrization. The line estimator is used to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, otherwise it should be disabled. This enable flag is also coded in the output matrix, because a permanently running Line Detection skill would produce many falsely detected lines. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results when line detection is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. In addition, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or a ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant properties of the drone, as given on the manufacturer's website, are listed below in Table 1; internal properties are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone using its free software (for both Android and iOS) and streams HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller and driver electronics. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project it was decided to use the drone's own structure, control electronics and software for the positioning of the drone; designing a drone controller from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. Therefore the first idea was to disassemble it and mount it on a swivel so it could be tilted down 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera to MATLAB is not straightforward: the camera is either not compatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone brings some difficulties. Since the source code of the drone is closed, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used for processing to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, defined in the figure below. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a horizontal FOV of close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2; the corresponding distance per pixel is calculated for the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
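For reference, the geometric relation between a diagonal FOV and the per-axis FOVs of an ideal rectilinear lens can be sketched as below (a Python sketch; the gap between the ~84° predicted here and the ~70° measured above suggests the 92° specification does not hold for this lens):

```python
import math

def axis_fov(diag_fov_deg, width, height):
    """Split a diagonal FOV into horizontal/vertical FOV for an ideal
    flat sensor with the given aspect ratio (width:height)."""
    diag = math.sqrt(width**2 + height**2)
    t = math.tan(math.radians(diag_fov_deg) / 2.0)
    h_fov = 2.0 * math.degrees(math.atan(t * width / diag))
    v_fov = 2.0 * math.degrees(math.atan(t * height / diag))
    return h_fov, v_fov

# spec'd 92 deg diagonal FOV, 16:9 images
h_fov, v_fov = axis_fov(92.0, 16, 9)
```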
<br />
Although these measurements were taken with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is described in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
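The string that such a wrapper ultimately sends follows the AT command format of the AR.Drone SDK, in which each float argument is transmitted as the signed 32-bit integer sharing its IEEE-754 bit pattern. A Python sketch of that encoding (the project's wrapper is written in MATLAB; the sequence number and values here are illustrative):

```python
import struct

def float_arg(f):
    """AR.Drone AT commands transmit a float as the signed 32-bit integer
    that shares the float's IEEE-754 bit pattern (per the AR.Drone SDK)."""
    return struct.unpack('<i', struct.pack('<f', f))[0]

def pcmd(seq, roll, pitch, gaz, yaw, flag=1):
    """Build an AT*PCMD progressive-command string; inputs in [-1, 1]."""
    args = ','.join(str(float_arg(v)) for v in (roll, pitch, gaz, yaw))
    return 'AT*PCMD={},{},{}\r'.format(seq, flag, args)

# e.g. a gentle forward tilt command (pitch = -0.2)
cmd = pcmd(1, 0.0, -0.2, 0.0, 0.0)
```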
<br />
== Top-Camera ==<br />
The topcam is a camera fixed above the playing field. It is used to estimate the location and orientation of the drone; this estimate serves as feedback for the drone to position itself at a desired location.<br />
The topcam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After some searching, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields 640x480 pixel images.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to determine the real-world size of the image frame and the real-world dimension per pixel, and is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfil the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the Drone Referee project. The agents in this project have two main capabilities: they can move and take images. Given the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project the planar motion of the drone in (x,y) is of interest, since the ball and objects on the pitch move in a 2-D space. Consequently, the desired trajectories of the drone are simple, such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x,y,θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project the HLC is designed and its parameters are tuned to meet a specific tracking criterion; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, via the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the motion equation of the drone, an I-action is not necessary. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that, these errors are calculated with respect to the global coordinate system. Hence, the control command first must be transformed in to drone coordinate system with rotational matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
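The dead-zone PD behaviour described above can be sketched as follows (a Python sketch; the gains and the dead-zone width are illustrative assumptions, not the tuned project values):

```python
def dead_zone_pd(error, d_error, kp=0.6, kd=0.2, dead_zone=0.05):
    """PD controller with a dead zone: inside the comfort zone the
    command is zero, so small oscillating commands are never sent."""
    if abs(error) < dead_zone:
        return 0.0
    # The error is intentionally NOT offset by the dead-zone width,
    # matching the behaviour described in the text.
    return kp * error + kd * d_error

u_inside = dead_zone_pd(0.02, 0.0)    # within the comfort zone
u_outside = dead_zone_pd(0.5, -0.1)   # outside the dead zone
```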
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles; within this method, the sequence of the rotations about the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is kept constant and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
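The resulting yaw-only transformation can be sketched as below (a Python sketch of the standard planar rotation; the project applies it inside Simulink):

```python
import math

def global_to_body(vx_g, vy_g, yaw_rad):
    """Rotate a planar command from the global frame into the drone body
    frame; valid when roll and pitch are small, as assumed above."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    vx_b = c * vx_g + s * vy_g
    vy_b = -s * vx_g + c * vy_g
    return vx_b, vy_b

# A drone yawed +90 deg sees a global +x command as a body -y command
vx_b, vy_b = global_to_body(1.0, 0.0, math.pi / 2)
```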
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To its left, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script for the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed so the robot can be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. Details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The TechUnited player robots communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked: as stated earlier, the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45709Implementation MSD162017-10-22T22:40:35Z<p>Tolcer: /* Line Estimator */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball and the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as is without alteration. Preferably we would also use this software to process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, the objects and the (yellow or orange) ball, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
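The colour-space conversion can be sketched per pixel as below (a Python sketch of the full-range BT.601 formula; note that MATLAB's own rgb2ycbcr uses the limited "studio" range, so the exact project values differ slightly):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr for 8-bit channel values."""
    y  =  0.299    * r + 0.587    * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# An orange ball pixel lands at high Cr and low Cb, the corner of the
# CbCr plane used by the colour filter
y, cb, cr = rgb_to_ycbcr(255, 128, 0)
```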
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generations [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project] ; the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is that the line detection code was separated from the combined detection code created by the previous generation and turned into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, for noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. A blob recognition algorithm then returns the blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties it is determined whether each blob could be a ball: blobs that are too big or too small are removed from the list. For the remaining candidate balls, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
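The confidence formula above can be sketched directly (a Python sketch; the example axis lengths and radii are illustrative):

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_expected):
    """Confidence from blob roundness and relative size, per the
    formula above; both factors lie in (0, 1]."""
    roundness = minor_axis / major_axis
    size = min(r_blob, r_expected) / max(r_blob, r_expected)
    return roundness * size

# A near-round blob close to the expected radius scores high
c = ball_confidence(minor_axis=28.0, major_axis=30.0,
                    r_blob=15.0, r_expected=14.0)
```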
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering color on the CbCr plane, the filtering is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of accepted blob sizes is larger for object detection than for ball detection, because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same way as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. Similar to the line detection case, the algorithm developed by the previous generation is used essentially. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter: even if the ball is not detected by the camera, its position with respect to the field coordinate system is still known (or at least predicted), and based on this coordinate information the in/out decision can be improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results, so further improvement of the refereeing is still necessary.<br />
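The coordinate-based part of the decision reduces to a bounds check on the (detected or predicted) ball position. A Python sketch, with illustrative field dimensions rather than the project's actual pitch size:

```python
def ball_out_of_pitch(x, y, half_length, half_width, margin=0.0):
    """In/out decision from the ball position in field coordinates
    (origin at the pitch centre)."""
    return abs(x) > half_length + margin or abs(y) > half_width + margin

out1 = ball_out_of_pitch(4.6, 0.0, half_length=4.5, half_width=3.0)   # out
out2 = ball_out_of_pitch(1.0, -2.9, half_length=4.5, half_width=3.0)  # in
```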
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection uses the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
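This condition can be sketched as a small predicate (a Python sketch; the example axis values are illustrative):

```python
def possible_collision(minor_axis, major_axis, min_object_radius):
    """Image-based collision test from the condition above: two touching
    players merge into one elongated blob, stretching the major axis."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_object_radius
            and major_axis >= 4 * min_object_radius)

hit = possible_collision(minor_axis=22, major_axis=44, min_object_radius=10)
single = possible_collision(minor_axis=20, major_axis=25, min_object_radius=10)
```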
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under an essential assumption: the drone's angular positions are stabilized well enough that roll (φ) and pitch (θ) are zero. Therefore these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame: x, y and yaw (ψ). However, to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, whose output data is accessible. The drone altitude is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating the Objects: Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference coordinate axes. The coordinate system of the image in MATLAB is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Next, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further according to the following principles:<br />
* The focal center of the camera is assumed to coincide with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of image) with respect to the origin of the drone is known. It is a drone-fixed position vector and lies along the x-axis of the drone. Taking the principles above into account and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object must be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera, so the height information of the drone must be used. Using the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
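The pixel-to-millimeter conversion can be sketched as below. This is an illustrative Python version; the FOV value and image width in the test are example assumptions, not the project's calibrated numbers:<br />

```python
import math

def mm_per_pixel(height_mm, fov_deg, image_width_px):
    """Ground distance covered by one pixel for a downward-facing camera
    at the given height (camera assumed parallel to the ground)."""
    ground_width_mm = 2.0 * height_mm * math.tan(math.radians(fov_deg) / 2.0)
    return ground_width_mm / image_width_px

def pixel_to_field_mm(px, py, height_mm, fov_deg, image_size_px):
    """Convert pixel coordinates (measured from the image center) into
    millimeters in the camera-centered frame."""
    ratio = mm_per_pixel(height_mm, fov_deg, image_size_px[0])
    return px * ratio, py * ratio
```

At a height of 1 m with a 70° horizontal FOV and 640 pixels width, one pixel covers roughly 2.2 mm of the field.<br />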
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents, which is sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as tasks. The path planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for an agent's controller. As shown in Fig. 1, it is assumed that the World Model can provide the position and velocity of objects such as the ball, whether or not it has been updated by the agent's camera. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on the ball's dynamics. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path planning block: first, the case of multiple drones, where collisions between them must be avoided; second, the generation of an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the World Model provides the path planner with the latest position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can also be taken into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to use for the reference. To solve this, we need a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the drone's initial conditions. In the searching algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig. 3); the target position is simply the ball position extrapolated over the time ahead. The reference position is then the position that satisfies t0 = TT, i.e. [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value is determined only along the moving direction of the turtle, so only the x-component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
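The searching algorithm of Fig. 3 can be sketched as follows, in Python for illustration. The drone motion model used here (a straight line at constant maximum speed) is a stand-in assumption for the real controller-plus-drone model, and a constant ball velocity is assumed:<br />

```python
import math

def time_to_target(drone_pos, target_pos, v_max):
    # Placeholder drone model: straight-line flight at maximum speed.
    dx = target_pos[0] - drone_pos[0]
    dy = target_pos[1] - drone_pos[1]
    return math.hypot(dx, dy) / v_max

def reference_position(drone_pos, ball_pos, ball_vel, v_max,
                       dt=0.05, t_max=5.0):
    """Step through lead times t0 and return the first extrapolated ball
    position the drone can reach in time, i.e. where TT(t0) <= t0."""
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(drone_pos, target, v_max) <= t0:
            return target
        t0 += dt
    # Fall back to the current ball position if no feasible lead time is found.
    return ball_pos
```

With a finer time step dt the returned reference converges to the exact intersection point of the two velocity vectors.<br />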
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning based on the drones' objectives (see Fig. 4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to the collision avoidance mode to keep the drones from getting closer. This is accomplished by sending a relatively strong velocity command to the drones, perpendicular to each drone's velocity vector, in a direction that maintains a safe distance. This command is sent to the LLC and is stopped once the drones are at safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an area of interest for those who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
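A minimal sketch of the repel command described above, in Python for illustration. The safety radius and repel speed are example assumptions, and the rule for choosing between the two perpendicular directions (pick the one pointing away from the other drone) is an interpretation of the text:<br />

```python
import math

def repel_command(own_pos, own_vel, other_pos, safe_dist=1.0, repel_speed=1.5):
    """Return a repel velocity command when the other drone is too close,
    or None when normal path planning may continue."""
    dx = own_pos[0] - other_pos[0]
    dy = own_pos[1] - other_pos[1]
    if math.hypot(dx, dy) >= safe_dist:
        return None  # no imminent collision
    # Unit vector perpendicular to the drone's own velocity.
    speed = math.hypot(own_vel[0], own_vel[1]) or 1.0
    perp = (-own_vel[1] / speed, own_vel[0] / speed)
    # Pick the perpendicular that points away from the other drone.
    if perp[0] * dx + perp[1] * dy < 0:
        perp = (-perp[0], -perp[1])
    return (repel_speed * perp[0], repel_speed * perp[1])
```

The supervisor would keep issuing this command until the drones are separated by at least the safety radius again.<br />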
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter offers clear advantages. A particle filter, also known as Monte Carlo localization, was chosen. The main reason is that a particle filter can handle multiple-object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as explained below. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
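The reset rule described above can be sketched as follows. This Python illustration (the project code is MATLAB) replaces the actual particle filter update with a simple smoothing gain, which is an assumption made for compactness; the 0.5 m threshold and two-outlier rule come from the text:<br />

```python
import math

class BallTracker:
    def __init__(self, init_pos, threshold=0.5, gain=0.2):
        self.estimate = init_pos      # 'strong' filter hypothesis (x, y)
        self.threshold = threshold    # 0.5 m outlier distance from the text
        self.gain = gain              # smoothing gain (stand-in for the PF)
        self.outliers = 0             # consecutive outlier count

    def update(self, z):
        dist = math.hypot(z[0] - self.estimate[0], z[1] - self.estimate[1])
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:
                # Two consecutive outliers: accept the change in direction
                # and re-initialize the strong filter at the measurement.
                self.estimate = z
                self.outliers = 0
            return self.estimate
        self.outliers = 0
        # Regular 'strong' update: move only slightly toward the measurement.
        self.estimate = (self.estimate[0] + self.gain * (z[0] - self.estimate[0]),
                         self.estimate[1] + self.gain * (z[1] - self.estimate[1]))
        return self.estimate
```

A single outlier is thus ignored as a likely false positive, while two consecutive outliers trigger the reset.<br />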
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, since it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with sensors detecting multiple players. Thus, the system somehow needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
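The ‘Match’ step described above can be sketched as follows, in Python for illustration. It mirrors the (suboptimal) fallback behavior discussed in the text: when a measurement's nearest player is already taken, the measurement is matched to its next nearest free player:<br />

```python
import math

def match_measurements(measurements, player_positions):
    """Return, for each measurement, the index of the matched player."""
    assigned = []
    taken = set()
    for z in measurements:
        # Player indices ordered by distance to this measurement.
        order = sorted(range(len(player_positions)),
                       key=lambda i: math.hypot(z[0] - player_positions[i][0],
                                                z[1] - player_positions[i][1]))
        # Take the nearest player that has not been matched yet.
        choice = next(i for i in order if i not in taken)
        assigned.append(choice)
        taken.add(choice)
    return assigned
```

With a high update rate and few players this greedy assignment is adequate; a globally optimal assignment (e.g. the Hungarian algorithm) would be more robust for many players.<br />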
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw rate and vertical speed. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. From the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and reduce the measurement noise, so that the closed-loop control of the drone can be made robust. Since the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in Figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are applied as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relations between inputs and outputs are analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the reconstructed drone position information is incomplete. Fig. 2 gives a visual impression of the original data measured by the top camera; it clearly shows what the drone's motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces reasonable estimates for the empty data points. <br />
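The preprocessing step can be sketched as below, using Python/NumPy for illustration. Samples where the drone was not detected are assumed to be stored as NaN and are filled by linear interpolation between the surrounding valid samples:<br />

```python
import numpy as np

def fill_missing(positions):
    """Linearly interpolate NaN gaps in a 1-D position signal."""
    x = np.asarray(positions, dtype=float)
    valid = ~np.isnan(x)
    # Interpolate the missing sample indices from the valid ones.
    x[~valid] = np.interp(np.flatnonzero(~valid), np.flatnonzero(valid),
                          x[valid])
    return x
```

This assumes the drone moves smoothly between detections, which holds for gaps of a few frames at 30 Hz.<br />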
==== Coordinate system introduction ====<br />
Since the drone is a flying object with four controlled degrees of freedom over the field, two coordinate systems are used: one in the body frame and one in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global coordinate system. <br><br />
The data can be transformed between body frame and global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame to avoid a parameter-varying Kalman filter. Figure 5 describes this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
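The rotation between the two frames can be written out as below, in Python for illustration. The yaw angle ψ defines a planar rotation; the body-frame axis convention here is an assumption consistent with the figure:<br />

```python
import math

def global_to_body(vx_g, vy_g, psi):
    """Rotate a global-frame vector into the body frame (yaw psi in rad)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g, -s * vx_g + c * vy_g)

def body_to_global(vx_b, vy_b, psi):
    """Inverse rotation: body-frame vector back to the global frame."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_b - s * vy_b, s * vx_b + c * vy_b)
```

Because the rotation depends only on the measured yaw, applying it outside the filter keeps the Kalman filter itself linear and time-invariant.<br />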
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br />
'''System identification for b (drone left-right tilt)'''<br><br />
The response to input b is measured by the top camera. The preprocessed data is shown below; this processed data is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figure above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world no system is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
And identified model is demonstrated in state- space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR.Drone, there is a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimation is nevertheless reasonable. <br><br />
<br />
In the real world no system is perfectly linear; the nonlinear behavior of the system may explain the mismatch of the identified model.<br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector X = [ẏ, y], i.e. velocity and position.<br><br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. For this purpose, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined here are the ball size, object size and line estimators. Using the most recent state of the drone together with the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, reducing false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The ball detection skill is implemented using the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' must be defined. This can be calculated from the available height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixel units is fed into the ball detection skill.<br />
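The estimator can be sketched as below, in Python for illustration. A tolerance band is returned because MATLAB's imfindcircles expects a [min max] radius range; the 20% margin, and the ball radius and FOV used in the test, are example assumptions:<br />

```python
import math

def expected_ball_radius_px(height_mm, fov_deg, image_width_px,
                            ball_radius_mm, margin=0.2):
    """Expected ball radius in pixels, returned as a (min, max) range."""
    ground_width_mm = 2.0 * height_mm * math.tan(math.radians(fov_deg) / 2.0)
    px_per_mm = image_width_px / ground_width_mm
    r = ball_radius_mm * px_per_mm
    return (r * (1.0 - margin), r * (1.0 + margin))
```

The range shrinks as the drone climbs, so the detection skill must re-query the estimator whenever the altitude changes.<br />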
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels are estimated using the drone height and FOV. Instead of the ball radius, here the real size of the objects are defined. The obtained estimated object radius in pixel units is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The line estimator block gives the expected outer lines of the field. It always calculates the position of the outer lines relative to the current state of the drone, encoded using the Hough transform parameterization. The line estimator is needed to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the line detection skill should be enabled, otherwise it should be disabled. This enable flag is also coded in the output matrix, because an always-running line detection skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the line detection skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positives of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information and the algorithm behind this estimator are explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the output matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant properties of the drone as given on the manufacturer's website are listed below in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free app (for both Android and iOS) and sends high-quality HD video streams to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, it was decided to use the drone's own structure, control electronics and software for positioning the drone. Moreover, low-level control of a drone is complicated and outside the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is mounted at the front of the drone; for refereeing, however, it should look downwards. The first idea was therefore to disassemble it and mount the camera on a swivel, tilting it down 90 degrees, which would require some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after considerable trial and error, it was observed that capturing and transferring images from the drone's embedded camera to MATLAB is not easy or straightforward. Further effort showed that using this camera is either incompatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect way is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is its field of view (FOV) angle; the definition of the FOV angle is shown in the figure. The captured images have an aspect ratio of 16:9. Measurements showed a horizontal FOV close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a Wi-Fi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
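To illustrate the string the wrapper must produce, the sketch below builds an AT*PCMD fly command as described in the AR.Drone SDK: each float argument is transmitted as the signed 32-bit integer sharing its IEEE-754 bit pattern. This is a minimal Python sketch, not the actual MATLAB wrapper; function names are illustrative.

```python
import struct

def float_to_int_arg(f):
    # AR.Drone AT commands transmit floats as the signed 32-bit
    # integer that shares the same IEEE-754 bit pattern
    return struct.unpack('<i', struct.pack('<f', f))[0]

def pcmd_string(seq, x, y, z, psi):
    # x, y: tilt fractions; z: vertical speed; psi: angular speed,
    # each clamped to [-1, 1] as the wrapper expects
    vals = [max(-1.0, min(1.0, v)) for v in (x, y, z, psi)]
    args = ','.join(str(float_to_int_arg(v)) for v in vals)
    return 'AT*PCMD={},1,{}\r'.format(seq, args)
```

For example, a tilt command of -0.8 is sent as the integer -1085485875, matching the example in the SDK documentation.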
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone is neither highly accurate nor critical. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After evaluating several options, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that streams images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is mounted facing down at the front of the drone. To reduce the weight of the added system, the camera's battery is removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV). The definition of the FOV is shown above. The Ai-Ball has a 480p resolution with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to determine the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
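The relation between the drone height and the real-world size of a pixel follows directly from the diagonal FOV and resolution above. A minimal sketch of the conversion (Python, illustrative names; the actual conversion lives in the Simulink code):

```python
import math

def mm_per_pixel(height_mm, diag_fov_deg=60.0, res=(640, 480)):
    # Real-world length of the image diagonal on the ground plane,
    # assuming the camera looks straight down from height_mm,
    # divided by the diagonal length in pixels
    diag_px = math.hypot(*res)  # 800 px for 640x480
    diag_mm = 2.0 * height_mm * math.tan(math.radians(diag_fov_deg / 2.0))
    return diag_mm / diag_px
```

At a flying height of 1 m this gives roughly 1.44 mm per pixel.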
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. Details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any extension, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Given the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are straight-line-like paths; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor performs only small angular maneuvers.<br />
As shown in fig. 2, the drone states (x,y,θ), measured from the top camera images, are compared to the reference values. The high level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, through a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig.3). The input-output diagram of the controller for each state consists of three regions. In the dead-zone region, if the error in a direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative using PD coefficients (Fig.4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system using a rotation matrix based on the Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
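The dead-zone PD law described above can be summarized in a few lines. This is an illustrative Python sketch of the control law per state, not the Simulink implementation; gains and dead-zone width are placeholders:

```python
def deadzone_pd(error, d_error, kp, kd, deadzone):
    # Inside the comfort zone the command is zero; outside it, a PD
    # law acts on the raw error (not offset by the dead-zone width,
    # so small commands near the oscillation region are avoided)
    if abs(error) < deadzone:
        return 0.0
    return kp * error + kd * d_error
```

Since there is no position-dependent force in the drone's equation of motion, no integral term appears here.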
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
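With roll and pitch assumed zero, the remaining transform is a planar rotation by the yaw angle. A sketch of mapping a velocity command from field coordinates into the drone body frame (Python, illustrative; sign conventions are an assumption of this sketch):

```python
import math

def global_to_drone(vx_g, vy_g, yaw_rad):
    # Planar rotation by the yaw angle: the inverse (transpose) of
    # the body-to-global rotation, applied to the global command
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    vx_b = c * vx_g + s * vy_g
    vy_b = -s * vx_g + c * vy_g
    return vx_b, vy_b
```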
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi. Furthermore, it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software that were developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as its operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol; this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is information on the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code base of TechUnited. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>

Implementation MSD16 (2017-10-22) <p>Tolcer: /* Ball Size Estimator */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is, without alterations. Ideally we could use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transform the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
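The RGB-to-YCbCr conversion used for the color filtering is a fixed linear map per pixel. A sketch of the full-range BT.601 variant (MATLAB's rgb2ycbcr uses the same transform, additionally rescaled to studio range):

```python
def rgb_to_ycbcr(r, g, b):
    # BT.601 full-range conversion; chroma channels centred on 128.
    # Inputs are 8-bit channel values (0-255).
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

A bright orange pixel, for example, yields a high Cr and a low Cb value, which is what the ball filter exploits.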
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is separating the line detection code from the combined detection code created by the previous generation, making it an isolated skill function.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; colors that lie in the upper-left corner of the CbCr plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest a value of 0. Next, for noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls, a confidence is calculated, based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
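The confidence formula above combines a roundness term with a size-match term, each in (0, 1]. A direct sketch in Python (variable names follow the formula):

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    # Roundness: 1.0 for a perfectly circular blob
    roundness = minor_axis / major_axis
    # Size match: 1.0 when the blob radius equals the expected
    # ball radius at the current height
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```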
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of color filtering in the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than the surroundings. Moreover, the range of accepted blob sizes is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: seen from the top, a player appears different than when seen from an angle. A bigger acceptance range ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-the-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. A detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted by the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives; further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when an image of the playing field shows no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since the collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if (((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
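The condition flags a blob that is both elongated and large relative to a single player, suggesting two players in contact. The same predicate as a Python sketch:

```python
def possible_collision(major_axis, minor_axis, r_min):
    # r_min is the minimal expected player radius; an elongated blob
    # at least two players wide and four radii long is flagged
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * r_min
            and major_axis >= 4 * r_min)
```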
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not relevant for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone's angular position is well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude must also be known. The drone has its own altimeter, and its output data is accessible. The obtained drone altitude is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of a detected object in the image, the obtained pixel coordinates must be transformed into the field reference coordinate axes. The coordinate system of the image in MATLAB is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further based on the following principles:<br />
* The focal center of the camera is assumed to coincide with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the image center) with respect to the origin of the drone is known; it is a drone-fixed position vector lying along the x-axis of the drone. Taking the principles above into account and adding the known (measured) drone position, including its yaw (ψ) orientation, to the position vector of the camera, the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object must be converted into real-world units (from pixels to millimeters). This conversion factor changes with the height of the camera, so the height information of the drone must be used. Using the height of the drone and the FOV of the camera, the pixel-to-millimeter ratio is calculated. More detailed information about the FOV is given in the next sections.<br />
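Putting the steps above together, a pixel detection can be mapped to field coordinates by scaling the offset from the image center, adding the camera offset along the drone's x-axis, rotating by the yaw angle and translating by the drone position. A Python sketch under the stated assumptions (axis conventions and names are illustrative):

```python
import math

def pixel_to_field(px, py, drone_x, drone_y, yaw, mm_per_px,
                   cam_offset_mm, res=(640, 480)):
    # Offset from the image centre, scaled to millimetres;
    # image y grows downwards, so flip its sign
    dx = (px - res[0] / 2.0) * mm_per_px
    dy = -(py - res[1] / 2.0) * mm_per_px
    # Camera centre lies cam_offset_mm along the drone's x-axis
    bx, by = dx + cam_offset_mm, dy
    # Rotate the body-frame offset by the yaw angle and translate
    # by the drone position to reach field coordinates
    c, s = math.cos(yaw), math.sin(yaw)
    return drone_x + c * bx - s * by, drone_y + s * bx + c * by
```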
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents, which is sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block assigns 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for each agent's controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not it has been updated by an agent's camera. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path-planning block. The first is related to the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object on the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to use the velocity vector of the object in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial conditions of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied. For the ground robot, the reference value should be determined only in the moving direction of the Turtle; hence only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
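The search for the time ahead t0 can be sketched as a simple forward scan: for each candidate t0, predict the ball position (constant velocity assumed) and check whether the drone's time to target satisfies TT <= t0. The drone motion model is passed in as a function, a hypothetical interface for this sketch:

```python
def find_time_ahead(drone_pos, drone_time_to, ball_pos, ball_vel,
                    t_max=5.0, dt=0.1):
    # drone_time_to(p, q): modelled travel time of drone + controller
    # from p to q (illustrative placeholder for the identified model)
    t0 = 0.0
    while t0 <= t_max:
        # Predicted ball position t0 seconds ahead
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if drone_time_to(drone_pos, target) <= t0:
            return t0, target
        t0 += dt
    # No feasible intercept within t_max: fall back to chasing the ball
    return 0.0, ball_pos
```

The returned target then replaces [x(t), y(t)] as the controller reference.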
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, the path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning calculated from the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The command, a velocity perpendicular to the velocity vector of each drone, is sent to the LLC and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be a possible area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
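A minimal sketch of this storage pattern (in Python rather than the project's MATLAB; the stored fields and names are illustrative) could look like this:<br />

```python
class Player:
    """Last known state of a single player (field names are assumptions)."""
    def __init__(self):
        self.position = (0.0, 0.0)

class WorldModel:
    """Central storage unit: stored data may only change via 'set'
    functions, mirroring the class described above."""
    def __init__(self, n_players_per_team):
        self._ball = (0.0, 0.0)
        self._drone = (0.0, 0.0, 0.0)          # x, y, yaw
        self._turtle = (0.0, 0.0)
        # players are a class of their own, since their number can vary
        self._players = [Player() for _ in range(2 * n_players_per_team)]

    # -- set functions: the only way to overwrite stored data --
    def set_ball(self, x, y):
        self._ball = (x, y)

    def set_drone(self, x, y, psi):
        self._drone = (x, y, psi)

    def set_player(self, idx, x, y):
        self._players[idx].position = (x, y)

    # -- read access is global --
    @property
    def ball(self):
        return self._ball

    @property
    def drone(self):
        return self._drone

    def player(self, idx):
        return self._players[idx].position
```

As in the MATLAB version, `W = WorldModel(n)` creates storage for n players per team, and read access is open while writes go through the `set` functions.<br />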
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related, in the sense that if measurement noise is filtered out, the prediction will be more accurate. Together these relations imply that tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given in the table below. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. it trusts the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter which source a measurement comes from. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
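Since the exact particle-filter update is given in the equation above, the following sketch only illustrates the hypothesis-switching logic: a smoothed ‘strong’ estimate that is reset to the raw measurement (the ‘weak’ hypothesis) after two consecutive outliers beyond 0.5 m. The smoothing law and its weight are assumptions, not the project's particle filter.<br />

```python
import math

RESET_DISTANCE = 0.5   # metres, as in the text
ALPHA = 0.2            # assumed smoothing weight of the 'strong' filter

class BallTracker:
    """Minimal stand-in for the strong/weak hypothesis logic; the real
    filter is particle based."""
    def __init__(self, x0):
        self.estimate = x0      # 'strong' hypothesis (x, y)
        self.outliers = 0       # consecutive far-away measurements

    def update(self, z):
        if math.dist(self.estimate, z) > RESET_DISTANCE:
            self.outliers += 1
            if self.outliers >= 2:      # change of direction, not noise
                self.estimate = z       # weak hypothesis becomes strong
                self.outliers = 0
        else:
            self.outliers = 0
            # exponential smoothing stands in for the particle update
            self.estimate = tuple(
                (1 - ALPHA) * e + ALPHA * m
                for e, m in zip(self.estimate, z))
        return self.estimate
```

A single outlier leaves the strong estimate untouched (a likely false positive); the second consecutive one triggers the reset described in the text.<br />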
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one player to be able to detect any collision between two players. In order to track players even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that the sensor(s) can detect multiple players at once. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
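The ‘Match’ step described above can be sketched as a greedy nearest-neighbor assignment (a simplified stand-in for the nested MATLAB function; it assumes there are no more measurements than players):<br />

```python
import math

def match(measurements, last_positions):
    """Assign each measured position to the nearest known player.
    If a measurement's nearest player is already claimed, it falls
    through to its next-nearest free player, as described above.
    Returns a list mapping measurement index -> player index."""
    assignment = []
    for z in measurements:
        # player indices sorted by distance to this measurement
        order = sorted(range(len(last_positions)),
                       key=lambda j: math.dist(z, last_positions[j]))
        # take the nearest player not already claimed
        chosen = next(j for j in order if j not in assignment)
        assignment.append(chosen)
    return assignment
```

As noted in the text, this greedy scheme is not globally optimal: the outcome depends on the order of the measurements, which is acceptable for two players at a high update rate.<br />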
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw rate and vertical speed. The corresponding forward and side velocities in the body frame are measured by sensors inside the drone. In addition, three LEDs on the drone can be detected by the camera above the field; from the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and minimize the measurement noise, making the subsequent closed-loop drone control more robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the reflected drone position information is incomplete. Fig.2 gives a visual impression of the original data measured by the top camera; it clearly shows the drone motion in one degree of freedom. To make the data continuous, interpolation can be applied. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces reasonable estimates for the empty data points. <br />
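The gap-filling step can be sketched with a simple linear interpolation over the missing (NaN) samples. Whether the project used linear or another interpolation scheme is not stated, so the scheme here is an assumption:<br />

```python
import numpy as np

def fill_missing(t, x):
    """Linearly interpolate missing (NaN) camera samples, as done for
    the ~25% of frames in which the top camera misses the drone.
    t: sample times, x: measured positions with NaN for missed frames."""
    x = np.asarray(x, dtype=float)
    valid = ~np.isnan(x)
    # interpolate every sample time from the valid (t, x) pairs
    return np.interp(t, np.asarray(t, dtype=float)[valid], x[valid])
```
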
====Coordinate systems introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one in the body frame, the other in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame in order to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
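With the z-axis and the small pitch/roll angles neglected, as in the text, the transformation between the two frames reduces to the standard planar rotation by the yaw angle, sketched below:<br />

```python
import math

def body_to_global(vx_b, vy_b, psi):
    """Rotate a body-frame velocity into the global frame using the
    yaw angle psi measured by the top camera (2-D case)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_b - s * vy_b,
            s * vx_b + c * vy_b)

def global_to_body(vx_g, vy_g, psi):
    """Inverse rotation: express a global-frame command in the body
    frame before sending it to the drone."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g,
            -s * vx_g + c * vy_g)
```

Filtering in the body frame and rotating the result back, as in Figure 5, keeps the Kalman filter itself free of the yaw-dependent rotation.<br />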
====Model identification from input to position ====<br />
Theoretically, the inputs and the corresponding velocity outputs are decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone exhibits a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
The nonlinear behavior of the system may explain the part of the response that the identified model does not match.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined here are the ball size, object size and line estimators. Using the most recent state of the drone together with the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms that reduce false positives, errors and the processing time of those algorithms. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill is implemented using the ''imfindcircles'' built-in command of the image processing toolbox of MATLAB®. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height is obtained from the drone position data; the others are defined in the initialization function. The estimated ball radius in pixels is fed into the ball detection skill.<br />
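The geometry behind this estimator can be sketched as follows for a downward-looking camera. The constants (diagonal FOV, resolution, ball radius) are illustrative assumptions, and the pinhole model ignores lens distortion:<br />

```python
import math

# assumed camera/ball constants for illustration
DIAG_FOV_DEG = 60.0        # diagonal field of view of the camera
RES_W, RES_H = 640, 480    # image resolution (4:3)
BALL_RADIUS = 0.11         # real ball radius [m]

def expected_ball_radius_px(height):
    """Expected ball radius in pixels for a camera looking straight
    down from `height` metres (simple pinhole-model sketch)."""
    # half-diagonal of the camera footprint on the ground
    half_diag = height * math.tan(math.radians(DIAG_FOV_DEG) / 2)
    diag_px = math.hypot(RES_W, RES_H)
    metres_per_pixel = 2 * half_diag / diag_px
    return BALL_RADIUS / metres_per_pixel
```

The higher the drone flies, the larger the ground footprint per pixel and the smaller the expected radius passed to the circle detector.<br />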
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is defined here. The estimated object radius in pixels is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It continuously calculates the position of the outer lines relative to the state of the drone; this position information is encoded using the Hough transform parametrization. The line estimator is required for enabling and disabling line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, otherwise it should be disabled. This enable/disable information is also encoded in the output matrix, because an always-running Line Detection Skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is given [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the final matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing task. The built-in properties of the drone as given on the manufacturer’s website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and it sends high-quality HD video streams to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, it was decided to use the drone’s own structure, control electronics and software for positioning the drone. Apart from that, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; for refereeing, however, it should look downwards. Therefore, the first idea was to disassemble the camera and mount it on a swivel to tilt it down 90 degrees, which would require some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring images from the drone's embedded camera to MATLAB is not straightforward. Further effort showed that using this camera is either incompatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best capture rate obtained with the current algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used for processing to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a FOV close to 70°, although the camera is specified to have a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a Wi-Fi camera with a TCP/IP interface was selected. This camera, called Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
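A sketch of this initialization in Python (the project itself uses MATLAB UDP objects): the AT-command formatting follows the AR.Drone SDK, but the exact byte sequence for waking up the navdata stream should be checked against the SDK steps in the figure above.<br />

```python
import socket

DRONE_IP = "192.168.1.1"       # remote host from the initialization list
NAV_PORT, CMD_PORT = 5554, 5556

def at_command(name, seq, *args):
    """Format an AT command string for the drone (AT*NAME=seq,args\r)."""
    payload = ",".join([str(seq)] + list(args))
    return f"AT*{name}={payload}\r"

def init_drone(seq=1):
    """Open the two UDP objects, trigger the navdata stream and set the
    horizontal reference with FTRIM (sequence follows the SDK sketch)."""
    cmd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.settimeout(0.001)                    # 1 ms timeout, as initialized
    nav.bind(("", NAV_PORT))
    # wake up the navdata stream on port 5554
    nav.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAV_PORT))
    # request the reduced 'demo' navdata set
    cmd.sendto(at_command("CONFIG", seq,
                          '"general:navdata_demo"', '"TRUE"').encode(),
               (DRONE_IP, CMD_PORT))
    # flat-trim: set the horizontal plane reference
    cmd.sendto(at_command("FTRIM", seq + 1).encode(), (DRONE_IP, CMD_PORT))
    return cmd, nav
```
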
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After this search, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to know the real-world size of the image frame and the corresponding real dimension per pixel. It is embedded into the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agent positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating this reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in a 2-D space. Consequently, the desired drone trajectories are simple trajectories such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor performs only small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative using PD coefficients (Fig.4). Since there is no position-dependent force in the motion equation of the drone, an I-action is not necessary. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width; this prevents small commands in the oscillation region from being sent to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
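A one-axis sketch of this dead-zone PD law (the gains and dead-zone width are illustrative, not the tuned project values):<br />

```python
DEAD_ZONE = 0.05   # assumed half-width of the comfort zone [m]
KP, KD = 0.6, 0.2  # assumed PD gains

def hlc_axis(error, d_error):
    """Dead-zone PD controller for one axis (cf. Fig.4): zero output
    inside the comfort zone, a plain PD law outside it. As described
    above, the error is deliberately not offset by the dead-zone width,
    so the output jumps past the small-command oscillation region."""
    if abs(error) < DEAD_ZONE:
        return 0.0
    return KP * error + KD * d_error
```
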
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Ф,φ,θ) about sequentially displaced axes of the reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes matters, as does the sequence of rotations. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles: the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
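Under this assumption, transforming a global-frame error into the drone frame reduces to a single planar rotation over the yaw angle ψ. A minimal Python sketch of this rotation (the project code itself is written in MATLAB/Simulink) is:<br />

```python
import math

def global_to_drone(ex, ey, yaw):
    """Rotate a position error from the global frame into the drone
    frame, assuming roll and pitch are negligible (yaw-only rotation)."""
    c, s = math.cos(yaw), math.sin(yaw)
    # Inverse (transpose) of the planar rotation by the yaw angle.
    return (c * ex + s * ey, -s * ex + c * ey)
```

For example, with a yaw of 90 degrees, a global x-error maps onto the negative y-axis of the drone.<br />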
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
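As an illustration of the UDP interface, the sketch below sends a command string to the robot. Note that the exact string format expected by the Raspberry Pi script is defined in the GitHub repository; the "vx vy vtheta" layout used here is only an assumption for illustration.<br />

```python
import socket

def send_robot_command(sock, addr, vx, vy, vtheta):
    """Send one (hypothetical) velocity command string to the robot
    over UDP. The real message format is defined in the repository."""
    msg = "{:.3f} {:.3f} {:.3f}".format(vx, vy, vtheta)
    sock.sendto(msg.encode("ascii"), addr)
    return msg
```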
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of the players<br><br />
and other entities present on the field, can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code base of TechUnited. This piece consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>

Implementation MSD16
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as is and do not alter it. Ideally, we would use this software to also process the images from the drone. However, understanding years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
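The conversion itself is a fixed linear mapping. A Python sketch using the standard full-range (JPEG) BT.601 coefficients is shown below; note that MATLAB's rgb2ycbcr uses a slightly different, limited-range scaling.<br />

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range (JPEG) BT.601 conversion for 8-bit RGB values."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```

For an orange ball pixel, Cr ends up well above 128 and Cb well below it, which is the region the color filter selects.<br />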
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm was updated and reused in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line-detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; these colors lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the resulting image, a blob-recognition algorithm returns blobs with their properties, such as the blob center and major- and minor-axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minorAxis / majorAxis) * (min(R_blob, R_ball) / max(R_blob, R_ball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
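This confidence can be computed directly from the blob properties returned by the blob-recognition step:<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence from blob roundness and relative size, following the
    formula above; both factors lie in (0, 1]."""
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size
```

A perfectly round blob of exactly the expected ball radius gives a confidence of 1.<br />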
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering in the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of detected blobs that could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball. If a player is seen from the top, it appears different than when it is seen from an angle. A bigger range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-the-pitch detection algorithm is called. Similar to the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system is still known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well, so a further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since the collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection makes use of the list of blobs generated in the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object. Moreover, the axes are compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if (((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
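In Python, the same check reads (axis lengths in pixels, with `min_radius` the minimal expected player radius):<br />

```python
def possible_collision(minor_axis, major_axis, min_radius):
    """Collision heuristic from the condition above: an elongated blob
    that is large enough to contain two players."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)
```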
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in diverse ways. In this project, the planar position (x, y, ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image-processing algorithms are developed based on an essential assumption: the drone's angular positions are stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore, these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter and the output of the altimeter is accessible. The obtained altitude is fused with the planar position data, yielding the following position vector for the drone.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball- and object-detection skills, the coordinates of the detected objects are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates have to be transformed into the field reference coordinate axes. The coordinate system of the image in MATLAB is given below. Note that here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further, based on the following principles:<br />
* The center of the image is assumed to be focal center of the camera and this is coincident with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (which is its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known. It is a drone-fixed position vector and lies along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now, the calculated pixel coordinates of the detected object have to be converted into real-world units (from pixels to millimeters). The conversion factor changes with the height of the camera. To compute it, the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
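The complete pixel-to-field conversion can be sketched as below. The parameter names are illustrative; the sketch assumes a yaw-only rotation and a camera offset along the drone x-axis, as listed in the principles above, with the metres-per-pixel scale following from the flying height and the horizontal field of view.<br />

```python
import math

def pixel_to_field(u, v, drone_x, drone_y, yaw, height,
                   cam_offset, fov_x, image_width):
    """Map a pixel offset (u, v) from the image centre to field
    coordinates. `cam_offset` is the camera position along the drone
    x-axis; parameter names are illustrative assumptions."""
    # Ground width covered by the image at this height, per pixel.
    scale = 2 * height * math.tan(fov_x / 2) / image_width
    # Pixel offsets expressed in the drone frame (camera aligned
    # with the drone axes, per the stated principles).
    dx = cam_offset + u * scale
    dy = v * scale
    # Rotate into the field frame by the drone yaw and translate.
    c, s = math.cos(yaw), math.sin(yaw)
    return (drone_x + c * dx - s * dy, drone_y + s * dx + c * dy)
```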
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by which agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as a task. The path-planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of objects like the ball, whether it has been updated by an agent camera or not. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on the ball's dynamics. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two aspects are addressed in the path-planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can also be taken into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so that they meet at the intersection of their velocity vectors. Using the current ball position as reference input would result in a curved trajectory (red line). However, if the estimated ball position some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach gives better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to set as the desired reference. To solve it, we require a model of the drone motion, including the controller, to calculate the time it takes to reach a certain point given the drone's initial condition. Then, in the search algorithm, for each time step ahead of the ball, the time-to-target (TT) of the drone is calculated (see Fig.3). The target position is simply the ball position predicted that time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should be determined only in the moving direction of the Turtle. Hence, only the x-component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
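A simplified version of this search can be sketched as below. Instead of the full drone model with controller, a constant drone speed is assumed here, and the ball is propagated with a constant-velocity model; both are simplifications of the approach described above.<br />

```python
import math

def reference_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    dt=0.05, t_max=5.0):
    """Search the look-ahead time t0 for which the drone's time-to-target
    TT equals t0; return the corresponding reference position."""
    for step in range(int(t_max / dt) + 1):
        t0 = step * dt
        # Ball position predicted t0 seconds ahead (constant velocity).
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0],
                          target[1] - drone_pos[1])
        tt = dist / drone_speed        # simplified time-to-target
        if tt <= t0:                   # first t0 the drone can meet
            return target
    # No intercept found within t_max: fall back to the current position.
    return ball_pos
```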
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above a field, the path planning should create paths that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning based on the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet criteria that indicate an imminent collision. The supervisory control then switches to the collision-avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be a possible area of interest for those who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
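The storage idea can be sketched as a small Python class. The actual set/get function names are listed in the tables above (shown as images), so the names used here are assumptions for illustration only.<br />

```python
class WorldModel:
    """Minimal storage sketch: values are changed only through explicit
    'set' functions, as described above. Function names are assumed."""
    def __init__(self, n_players):
        self.ball = None                   # last known ball position (x, y)
        self.drone = None                  # drone position (x, y, psi, z)
        self.turtle = None                 # turtle position
        self.players = [None] * n_players  # one entry per player

    def set_ball(self, pos):
        self.ball = pos

    def set_player(self, idx, pos):
        self.players[idx] = pos

    def get_player(self, idx):
        return self.players[idx]
```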
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
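This reset rule can be sketched as follows. Only the outlier bookkeeping is shown; the smoothing update of the strong filter itself is omitted here.<br />

```python
import math

class BallTracker:
    """Sketch of the reset rule described above: if two consecutive
    measurements lie more than `threshold` metres from the strong
    estimate, the last measurement becomes the new estimate."""
    def __init__(self, estimate, threshold=0.5):
        self.estimate = estimate
        self.threshold = threshold
        self.outliers = 0

    def update(self, z):
        d = math.hypot(z[0] - self.estimate[0], z[1] - self.estimate[1])
        if d > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:   # change of direction assumed
                self.estimate = z
                self.outliers = 0
        else:
            self.outliers = 0
        return self.estimate
```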
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one player to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
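The greedy nearest-neighbour matching described above can be sketched as:<br />

```python
import math

def match(measurements, last_positions):
    """Match each measured position to the closest player not yet taken;
    a measurement whose nearest player is taken falls back to the
    next-nearest, as described above."""
    assignment = []
    taken = set()
    for z in measurements:
        best, best_d = None, float("inf")
        for i, p in enumerate(last_positions):
            if i in taken:
                continue               # fall back to the next-nearest
            d = math.hypot(z[0] - p[0], z[1] - p[1])
            if d < best_d:
                best, best_d = i, d
        assignment.append(best)
        taken.add(best)
    return assignment
```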
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera at the top of the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br />
Since the camera at the top of the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise. This makes the closed-loop control system for the drone robust. As the flying height of the drone is not critical for the system, the height of the drone is not considered in the Kalman-filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig. 2) gives a visual impression of the original data measured from the top camera; it clearly shows what the drone motion looks like in one degree of freedom. To make the signal continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces reasonable estimates for the empty data points. <br />
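The gap-filling step can be sketched as follows. This is an illustrative Python version, not the MATLAB code used in the project; missing camera samples are marked as None and filled by linear interpolation between the nearest valid neighbours:

```python
def interpolate_gaps(samples):
    """Fill None entries by linear interpolation between the
    nearest valid neighbours (illustrative sketch)."""
    filled = list(samples)
    valid = [i for i, s in enumerate(filled) if s is not None]
    if not valid:
        return filled
    for i in range(len(filled)):
        if filled[i] is not None:
            continue
        # indices of the nearest valid samples on each side
        left = max((j for j in valid if j < i), default=None)
        right = min((j for j in valid if j > i), default=None)
        if left is None:        # leading gap: hold first valid value
            filled[i] = filled[right]
        elif right is None:     # trailing gap: hold last valid value
            filled[i] = filled[left]
        else:                   # interior gap: linear interpolation
            t = (i - left) / (right - left)
            filled[i] = filled[left] + t * (filled[right] - filled[left])
    return filled
```

Edge gaps are simply held at the nearest valid value, since extrapolating position from a single boundary sample would be unreliable.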
====Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom over the field, two coordinate systems are used: one fixed to the body frame, the other the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame so that a parameter-varying Kalman filter is avoided. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
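The frame transformation above is a planar rotation by the yaw angle. A minimal Python sketch (the project implements this in Simulink; the function names here are illustrative):

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a planar velocity from the drone body frame to the
    global frame; psi is the yaw angle in radians (sketch)."""
    vx_glob = math.cos(psi) * vx_body - math.sin(psi) * vy_body
    vy_glob = math.sin(psi) * vx_body + math.cos(psi) * vy_body
    return vx_glob, vy_glob

def global_to_body(vx_glob, vy_glob, psi):
    """Inverse transform: global frame back to body frame
    (a rotation matrix is orthogonal, so the inverse is the
    rotation by -psi)."""
    return body_to_global(vx_glob, vy_glob, -psi)
```

Filtering in the body frame and rotating only the filter output keeps the Kalman filter matrices constant, which is exactly the motivation given above.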
====Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are, in theory, decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to let MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone is modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the part of the response that the identified model does not match.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the subsequent Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone is, however, a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br> <br />
The model is then:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
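To show how the filter bridges frames in which the top camera misses the LEDs, here is a minimal one-axis Kalman filter sketch in Python. It uses a generic constant-velocity model with placeholder noise values, not the identified drone model above; the actual project filter runs in Simulink with the identified state-space matrices.

```python
class ConstantVelocityKF:
    """Minimal 1-D Kalman filter (sketch). State: [position, velocity];
    only position is measured. Passing z=None performs a pure
    prediction step, which is what happens when the top camera
    does not detect the drone LEDs. dt, q, r are illustrative."""

    def __init__(self, dt=0.1, q=0.01, r=0.05):
        self.dt, self.q, self.r = dt, q, r
        self.p, self.v = 0.0, 0.0
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance

    def step(self, z=None):
        dt, P = self.dt, self.P
        # --- predict with the constant-velocity model x' = A x ---
        self.p += dt * self.v
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + self.q
        self.P = [[P00, P01], [P10, P11]]
        # --- update only when the camera actually saw the LEDs ---
        if z is not None:
            S = self.P[0][0] + self.r           # innovation covariance
            K0 = self.P[0][0] / S               # Kalman gain (position)
            K1 = self.P[1][0] / S               # Kalman gain (velocity)
            y = z - self.p                      # innovation
            self.p += K0 * y
            self.v += K1 * y
            P = self.P
            self.P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
                      [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        return self.p
```

During a camera dropout the position estimate keeps moving with the estimated velocity instead of freezing, which is the behaviour needed for robust closed-loop control.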
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the recent state of the drone together with the camera's field of view and resolution (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms to reduce false positives, errors and the processing time of the algorithms. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill is achieved using the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' should be defined. This can be calculated from the available height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixels is fed into the ball detection skill.<br />
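The pixel-radius calculation can be sketched as below, assuming a pinhole camera model. The default ball radius and FOV values here are illustrative, not the project constants:

```python
import math

def expected_ball_radius_px(height_m, ball_radius_m=0.11,
                            fov_deg=70.0, image_width_px=640):
    """Estimate the expected ball radius in pixels for a downward-
    facing camera at a given height (pinhole-model sketch).
    The 0.11 m ball radius and 70 deg horizontal FOV are assumed
    example values."""
    # width of the ground patch seen by the camera at this height
    ground_width_m = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    pixels_per_meter = image_width_px / ground_width_m
    return ball_radius_m * pixels_per_meter
```

The higher the drone flies, the fewer pixels the ball occupies, which is why the estimator must be re-evaluated with the current drone height before each detection call.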
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV; instead of the ball radius, the real size of the objects is defined here. The estimated object radius in pixels is fed into the object detection skill.<br />
<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It continuously calculates the position of the outer lines relative to the state of the drone; this position information is encoded using the Hough transform convention. The line estimator is used for enabling and disabling line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, otherwise it should be disabled. This flag is also encoded in the output matrix, because an always-running Line Detection Skill would produce many false positive line outputs. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. One column is added to the final matrix to indicate whether the predicted line is an ''end'' or a ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone as given on the manufacturer’s website are listed in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller and driver electronics. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project the drone's own structure, control electronics and software are used for positioning the drone; besides, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, placed at the front, which is used to capture images. For refereeing, however, the camera should look downwards, so the first idea was to disassemble it and mount it on a swivel tilted down by 90 degrees, at the cost of some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the drone's embedded camera to MATLAB is neither easy nor straightforward: the drone camera is either not compatible with MATLAB or causes a lot of delay. The idea of a swiveled drone camera was therefore abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly in MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which costs processing time: the best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is its field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed the horizontal FOV to be near 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at standard resolution (640x360).<br />
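For reference, the horizontal FOV implied by a diagonal FOV under an ideal pinhole model can be computed as below (an illustrative sketch; real lenses with distortion deviate, which is consistent with the measured ~70° being smaller than the value implied by the 92° specification):

```python
import math

def horizontal_fov_deg(diag_fov_deg, aspect_w=16, aspect_h=9):
    """Horizontal FOV implied by a diagonal FOV under a pinhole
    model: the tangents of the half-angles scale with the sensor
    dimensions (sketch)."""
    diag = math.hypot(aspect_w, aspect_h)
    t = (aspect_w / diag) * math.tan(math.radians(diag_fov_deg) / 2.0)
    return 2.0 * math.degrees(math.atan(t))
```

For a 92° diagonal FOV at 16:9 this gives roughly 84° horizontally, so the measured 70° indicates that the usable FOV is noticeably smaller than the specification suggests.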
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera is called the Ai-Ball and is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
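The initialization above can be sketched in Python as follows. This is an illustrative outline, not the project's MATLAB implementation; the exact handshake is specified in the AR.Drone SDK2 documentation, and the navdata wake-up bytes and CONFIG key shown here are the commonly documented ones:

```python
import socket

DRONE_IP = "192.168.1.1"     # remote host from the initialization above
AT_PORT, NAV_PORT = 5556, 5554

seq = 0
def at_command(name, *args):
    """Build one AT command string; sequence numbers must be
    strictly increasing or the drone ignores the command."""
    global seq
    seq += 1
    payload = ",".join([str(seq)] + [str(a) for a in args])
    return "AT*{}={}\r".format(name, payload)

def init_drone():
    """Sketch of the initiation sequence described above."""
    cmd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.bind(("", NAV_PORT))
    # wake up the navdata stream by sending a few bytes to port 5554
    nav.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAV_PORT))
    # switch to the reduced 'demo' navdata set
    cmd.sendto(at_command("CONFIG", '"general:navdata_demo"', '"TRUE"')
               .encode(), (DRONE_IP, AT_PORT))
    # set the horizontal-plane reference (drone must be on flat ground)
    cmd.sendto(at_command("FTRIM").encode(), (DRONE_IP, AT_PORT))
```

The strictly increasing sequence number is the detail most easily missed: the drone silently drops any AT command whose sequence number is not greater than the last one it received.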
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively; the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
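On the input side, the wrapper must convert the four doubles into the string the drone expects. The AR.Drone's progressive command (AT*PCMD) encodes each floating-point argument as the signed 32-bit integer with the same bit pattern; a Python sketch of that conversion (the project's wrapper is a MATLAB function, and the helper names here are illustrative):

```python
import struct

def f2i(x):
    """Reinterpret a 32-bit float's bit pattern as a signed int,
    as required for the arguments of AT*PCMD."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def pcmd(seq, tilt_x, tilt_y, v_z, v_psi):
    """Build the progressive-command string for one input vector
    [tilt_x, tilt_y, v_z, v_psi], each in [-1, 1] (sketch; the
    flag value 1 enables the progressive commands)."""
    args = ",".join(str(f2i(v)) for v in (tilt_x, tilt_y, v_z, v_psi))
    return "AT*PCMD={},1,{}\r".format(seq, args)
```

For example, a tilt of -0.8 is transmitted as -1085485875, the integer sharing the bit pattern of the 32-bit float -0.8.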
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of this search, a Wi-Fi webcam was finally selected; its details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the battery of the camera is removed and its power is supplied by the drone over a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV), defined above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real dimension per pixel, and it is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the TURTLE was constructed and programmed as a football-playing robot. Details on the mechanical design and on the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any expansion, as part of the extensive code base could be used as-is to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple paths such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ), measured from the top-camera images, are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, through a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero; this results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative with PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an I action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are deliberately not offset by the dead-zone width. This prevents sending small commands, which fall in the oscillation region, to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system; the control command must therefore first be transformed into the drone coordinate system using a rotation matrix based on the Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
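The dead-zone PD law for a single axis can be sketched as follows (illustrative Python; the project implements this in Simulink and the gains here are placeholders):

```python
def deadzone_pd(error, d_error, kp, kd, deadzone):
    """High-level controller output for one axis (sketch).
    Inside the comfort zone the command is zero; outside it a PD
    law acts on the raw error, deliberately NOT offset by the
    dead-zone width, so small commands in the LLC's oscillation
    region are never sent."""
    if abs(error) < deadzone:
        return 0.0
    return kp * error + kd * d_error
```

Note the jump at the dead-zone boundary: the first nonzero command is already kp times the full dead-zone width, which is exactly the behaviour described above for avoiding the unstable small-command region of the built-in LLC.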
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Φ, φ, θ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles; within this method, the order of rotation around the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction does not change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot from a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited TURTLE allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state can be computed, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field. These locations are expressed in the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the TURTLE) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. Details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally; therefore, the TURTLE used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the TURTLE and fused with the other algorithms and software developed for the drone, which were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as its operating system. The TechUnited player robots communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure below.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the TURTLE. Of all the data received from the TURTLE, only the part that best suited the needs of the project was used: as stated earlier, the locations of the TURTLE, the ball and the players.<br> <br />
A small piece of code was taken from the code base of TechUnited. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the TURTLE, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in the MATLAB environment, and sending it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the TURTLE and the Ubuntu PC was implemented in Simulink and is depicted below. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are; to detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as-is without alteration. Preferably we would use this software to also process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space; for detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transform the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
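The per-pixel conversion can be sketched as below. This uses the full-range JPEG/BT.601 coefficients as an illustration; MATLAB's `rgb2ycbcr` uses the slightly different studio-swing variant, so the exact numbers in the project differ:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to full-range YCbCr
    (JPEG/BT.601 coefficients; an illustrative sketch, not
    MATLAB's studio-swing rgb2ycbcr)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b        # luma
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b  # blue-difference
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b  # red-difference
    return y, cb, cr
```

In this space, ball colors (red/orange/yellow) cluster in the high-Cr, low-Cb region regardless of brightness, which is what makes the thresholding robust.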
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color: the balls that can be used are red, orange or yellow, colors which lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest a value of 0. Next, for noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns the blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball: blobs that are too big or too small are removed from the list. For each remaining candidate ball, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
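The confidence formula above translates directly into code; a small Python sketch of the same calculation (the project computes this in MATLAB):

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball_expected):
    """Confidence measure from the text: roundness (minor/major
    axis ratio) times size agreement between the detected blob
    radius and the expected ball radius. Both factors lie in
    (0, 1], so a perfectly round, perfectly sized blob scores 1."""
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball_expected) / max(r_blob, r_ball_expected)
    return roundness * size
```

Because both factors are ratios of the smaller quantity to the larger, the score degrades symmetrically whether a blob is too large or too small, too wide or too narrow.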
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering color on the CbCr plane, it is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of accepted blob sizes is larger for the object detection than for the ball detection, because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-the-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. A detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter: although the ball is not detected by the camera, the position of the ball with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take the images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection uses the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
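The condition above can be expressed as a small predicate. A minimal Python sketch (the project itself uses MATLAB); the function name is illustrative:

```python
def possible_collision(major_axis, minor_axis, minimal_object_radius):
    """True when a blob is both elongated and large enough to plausibly
    be two touching players (the condition stated above)."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * minimal_object_radius
            and major_axis >= 4 * minimal_object_radius)
```

A single round player blob fails the elongation test, while a merged blob of two adjacent players passes all three checks.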
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (here only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not needed for the refereeing tasks, because all refereeing and image processing algorithms are built on one essential assumption: the drone attitude is stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore, these two coordinates are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and the image processing, the drone altitude should also be known. The drone has its own altimeter, and its output data is accessible. The obtained drone altitude data is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates must be transformed into the field reference coordinate frame. The coordinate system of the image in MATLAB is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further under the following assumptions:<br />
* The focal center of the camera coincides with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector that lies along the x-axis of the drone. Taking the assumptions above into account and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate frame can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera, so the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
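Under the assumptions listed above, the pixel-to-field conversion can be sketched as below. This is an illustrative Python sketch, not the project's MATLAB implementation; the default FOV, image size and camera offset are example values.

```python
import math

def pixel_to_field(u, v, drone_x, drone_y, psi, height,
                   img_w=640, img_h=360, fov_h_deg=70.0, cam_offset=0.1):
    """Convert a pixel (u, v) to field coordinates (metres).
    fov_h_deg and cam_offset (camera offset along the drone x-axis)
    are illustrative values, not the project's constants."""
    # metres spanned per pixel at this height (pinhole model, square pixels)
    m_per_px = 2.0 * height * math.tan(math.radians(fov_h_deg) / 2.0) / img_w
    # pixel offset from the image centre, converted to metres (camera frame)
    dx_cam = (u - img_w / 2.0) * m_per_px
    dy_cam = (v - img_h / 2.0) * m_per_px
    # camera centre in the field frame: drone pose plus rotated camera offset
    cx = drone_x + cam_offset * math.cos(psi)
    cy = drone_y + cam_offset * math.sin(psi)
    # rotate the camera-frame offset by the drone yaw into the field frame
    fx = cx + dx_cam * math.cos(psi) - dy_cam * math.sin(psi)
    fy = cy + dx_cam * math.sin(psi) + dy_cam * math.cos(psi)
    return fx, fy
```

For example, the image centre with zero camera offset maps to the drone position itself, regardless of the height.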
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig. 1, it is assumed that the world model can provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path planning block: first, the case of multiple drones, where collisions between them must be avoided; second, generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more effective way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between drone and ball is large, the drone should track a position ahead of the object, so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to set as the desired reference. To solve this, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig. 3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, the same strategy can be applied to the ground agents, which move only in one direction. For the ground robot, the reference value should be determined only along the moving direction of the turtle; hence only the x-component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
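The search over the time ahead can be sketched as a simple forward scan: increase t until the drone's predicted travel time to the extrapolated ball position no longer exceeds t. This is an illustrative Python sketch; `time_to_target` stands in for the drone motion model mentioned above and is a hypothetical function, and the step sizes are example values.

```python
def find_time_ahead(ball_pos, ball_vel, time_to_target,
                    t_max=5.0, dt=0.05):
    """Search for the time ahead t0 with t0 ≈ TT(target(t0)).
    `time_to_target(x, y)` is a hypothetical drone-model function
    returning the time the drone needs to reach (x, y)."""
    t = 0.0
    while t <= t_max:
        tx = ball_pos[0] + ball_vel[0] * t   # predicted ball position
        ty = ball_pos[1] + ball_vel[1] * t   # after t seconds
        if time_to_target(tx, ty) <= t:      # drone can arrive in time
            return (tx, ty)
        t += dt
    # fall back to the current ball position if no crossing is found
    return (ball_pos[0], ball_pos[1])
```

With a constant travel-time model of 1 s and a ball moving at 1 m/s along x, the returned reference lies roughly 1 m ahead of the current ball position.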
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planner should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning that is calculated based on the objectives of the drones (see Fig. 4). The collision avoidance block is triggered when the states of the drones meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is done by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone; it is sent to the LLC as a velocity command and is stopped once the drones are in safe positions. Since this project deals with only one drone, collision avoidance was not implemented; however, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
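The perpendicular repel command described above can be sketched as a 90-degree rotation of the drone's current velocity vector. A minimal Python sketch; the speed and turning sense are illustrative choices, not values from the project.

```python
import math

def repel_velocity(vx, vy, speed=1.0, clockwise=True):
    """Velocity command perpendicular to the drone's current velocity
    (vx, vy), used to steer two drones apart. `speed` and the turning
    sense are illustrative assumptions."""
    norm = math.hypot(vx, vy)
    if norm == 0.0:
        return (speed, 0.0)            # arbitrary direction when hovering
    ux, uy = vx / norm, vy / norm      # unit velocity vector
    if clockwise:                      # rotate by -90 degrees
        return (uy * speed, -ux * speed)
    return (-uy * speed, ux * speed)   # rotate by +90 degrees
```

The returned command always has zero dot product with the current velocity, so each drone is pushed sideways rather than braked.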
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act as a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), representing how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
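The storage role described above can be sketched as a small class: globally readable state, writable only through explicit set functions. This Python sketch only mirrors the structure described in the text (the project's class is in MATLAB); the method names and stored fields are illustrative, not the exact API from Tables 1 and 2.

```python
class Player:
    """Last known planar position of one player."""
    def __init__(self):
        self.pos = (0.0, 0.0)

class WorldModel:
    """Minimal sketch: one ball, one drone, one turtle, and a list of
    players for both teams, changed only via 'set' functions."""
    def __init__(self, n_players):
        self.ball = (0.0, 0.0)
        self.drone = (0.0, 0.0, 0.0, 0.0)            # x, y, psi, z
        self.turtle = (0.0, 0.0)
        self.players = [Player() for _ in range(2 * n_players)]

    def set_ball(self, x, y):
        self.ball = (x, y)

    def set_drone(self, x, y, psi, z):
        self.drone = (x, y, psi, z)

    def set_player(self, idx, x, y):
        self.players[idx].pos = (x, y)
```

As in the text, the players are objects of their own (their number varies), while ball, drone and turtle are plain properties.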
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, and originate from multiple sources, a filter offers clear advantages. A particle filter, also known as Monte Carlo localization, was chosen, mainly because it can handle multiple-object tracking. This proves useful when the filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as explained in this document. Ideally, this filter should perform three tasks:<br>
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in the case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br>
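The reinitialization rule described above (two consecutive measurements more than 0.5 m from the strong estimate trigger a reset) can be sketched as follows. This Python sketch only captures that outlier logic; the full particle-filter update of the strong estimate is replaced by a placeholder, and the class name is illustrative.

```python
import math

class BallTracker:
    """Sketch of the two-hypothesis logic: reinitialize the 'strong'
    estimate when two consecutive measurements are more than
    `threshold` metres away from it (0.5 m in the text)."""
    def __init__(self, threshold=0.5):
        self.estimate = None
        self.threshold = threshold
        self.outliers = 0

    def update(self, z):
        if self.estimate is None:
            self.estimate = z
            return self.estimate
        dist = math.hypot(z[0] - self.estimate[0], z[1] - self.estimate[1])
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:      # change of direction detected:
                self.estimate = z       # the weak hypothesis takes over
                self.outliers = 0
        else:
            self.outliers = 0
            # placeholder for the particle-filter update of the estimate
            self.estimate = z
        return self.estimate
```

A single outlier is ignored as a likely false positive; only a second consecutive outlier moves the strong estimate, matching the reasoning in the text.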
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br>
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
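The ‘Match’ logic described above can be sketched as a greedy nearest-neighbour assignment with a fallback to the next-nearest player when the nearest is already taken. An illustrative Python sketch of that idea (the project's function is in MATLAB and may differ in detail):

```python
import math

def match(measurements, last_positions):
    """Greedy nearest-neighbour matching of measurements to players.
    Returns {measurement index: player index}. As noted in the text,
    this is not optimal when two measurements prefer the same player."""
    assignment = {}
    taken = set()
    for i, z in enumerate(measurements):
        # players sorted by distance to this measurement
        order = sorted(range(len(last_positions)),
                       key=lambda j: math.dist(z, last_positions[j]))
        for j in order:              # fall back to the next-nearest
            if j not in taken:       # player when the nearest is taken
                assignment[i] = j
                taken.add(j)
                break
    return assignment
```

With a high update rate and two players this greedy scheme suffices, but as the text notes, it can degrade with many players entering and leaving a sensor's field of view.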
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and minimize the measurement noise, making the subsequent closed-loop control of the drone more robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in Figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the reported drone position information is incomplete. The example (Fig. 2) provides a visualization of the original data measured by the top camera, clearly indicating what the motion of the drone looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces reasonable guesses for the empty data points. <br />
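The gap-filling step can be sketched with simple linear interpolation over the missing samples. An illustrative Python sketch (the project uses MATLAB); `None` stands in for the empty camera samples:

```python
def interpolate_gaps(values):
    """Linearly fill None entries in a 1-D position trace, mimicking
    the interpolation of empty top-camera samples."""
    out = list(values)
    known = [i for i, v in enumerate(out) if v is not None]
    for i, v in enumerate(out):
        if v is None:
            left = max((k for k in known if k < i), default=None)
            right = min((k for k in known if k > i), default=None)
            if left is None:            # no earlier sample: hold the next
                out[i] = out[right]
            elif right is None:         # no later sample: hold the last
                out[i] = out[left]
            else:                       # linear interpolation in between
                frac = (i - left) / (right - left)
                out[i] = out[left] + frac * (out[right] - out[left])
    return out
```

In MATLAB the same effect is typically obtained with `interp1` over the non-empty sample indices.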
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one in the body frame, the other in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global coordinate system. <br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the identified model is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
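The body-to-global transformation above is a standard planar rotation by the drone yaw ψ. A minimal Python sketch of that rotation and its inverse (illustrative; the project applies this in Simulink/MATLAB):

```python
import math

def body_to_global(vx_b, vy_b, psi):
    """Rotate a body-frame vector into the global frame by yaw psi."""
    vx_g = vx_b * math.cos(psi) - vy_b * math.sin(psi)
    vy_g = vx_b * math.sin(psi) + vy_b * math.cos(psi)
    return vx_g, vy_g

def global_to_body(vx_g, vy_g, psi):
    """Inverse rotation: global frame back to the body frame."""
    return body_to_global(vx_g, vy_g, -psi)
```

Because the rotation depends on ψ, applying it outside the filter keeps the Kalman filter itself linear and time-invariant, as argued in the text.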
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result represents how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there are 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimation is nevertheless reasonable. <br><br />
<br />
In the real world, nothing is perfectly linear; the nonlinear behavior of the system may cause the mismatched part of the identified model.<br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the subsequent Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y-direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The estimator blocks defined are the ball size, object size and line estimators. Using the recent state of the drone, together with the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms to reduce false positives, errors and processing time. <br />
<br />
===Ball Size Estimator===<br />
The ball detection skill uses the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in the image, in ''pixels'', should be defined. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball. The height information is obtained from the drone position data; the other quantities are defined in the initialization function. The estimated ball radius in pixels is fed into the ball detection skill.<br />
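The estimation described above can be sketched as a one-line geometric calculation. An illustrative Python sketch; the default ball radius, FOV and image width are example values, not the project's initialization constants.

```python
import math

def expected_ball_radius_px(height, ball_radius=0.11,
                            fov_h_deg=70.0, img_w=640):
    """Expected ball radius in pixels given the camera height (m).
    ball_radius, fov_h_deg and img_w are illustrative assumptions."""
    # width of the ground plane covered by the image at this height
    ground_width = 2.0 * height * math.tan(math.radians(fov_h_deg) / 2.0)
    px_per_m = img_w / ground_width      # pixels per metre on the ground
    return ball_radius * px_per_m
```

The higher the drone flies, the smaller the expected radius, which is exactly why the search range of the circle detector must be re-estimated from the current drone height.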
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected sizes of the objects in pixels are estimated using the drone height and the FOV. Instead of the ball radius, here the real sizes of the objects are defined.<br />
===Line Estimator===<br />
The line estimator block gives the expected outer lines of the field. It always calculates the relative position of the outer lines corresponding to the state of the drone. This position information is encoded using the Hough transformation criteria. The line estimator is used for enabling and disabling line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the line detection skill should be enabled; otherwise it should be disabled. This decision is also encoded in the output matrix, because an always-running line detection skill would produce many false positive line detections. The expected positions of the outer lines are not only used for enabling and disabling the line detection skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
The more detailed information and the algorithm behind this estimator are explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the final matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone as given on the manufacturer's website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, controlling a drone is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing it should look downwards. Therefore, the first idea was to disassemble the camera and mount it on a swivel to tilt it down 90 degrees, which would require some changes to the structure. Since all implementation is done in the MATLAB/Simulink environment, the camera images should be accessible from MATLAB. However, after some trial and error it was observed that capturing and transferring images from the drone's embedded camera is neither easy nor straightforward in MATLAB. Further effort showed that using this camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB. However, taking snapshots from the drone camera directly from MATLAB is not possible with the drone's built-in software. Therefore, an indirect route is required, and this costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a horizontal FOV of close to 70°, even though the camera is specified to have a 92° diagonal FOV. The measurements and results are summarized in Table 2. Here, the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, that camera is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera is called the Ai-Ball and is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
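The initiation sequence above boils down to sending plain-text AT commands over UDP. The sketch below (Python, not MATLAB) shows how the configuration and FTRIM command strings can be built; the command formats follow the AR.Drone SDK2, but the helper function names are our own, illustrative choices.<br />

```python
# Sketch of the AR.Drone AT-command strings used during initialization.
# Command formats follow the AR.Drone SDK2; the helper functions are
# illustrative, not part of any official library.

def at_config(seq, key, value):
    """AT*CONFIG sets a configuration key, e.g. enabling Navdata demo mode."""
    return 'AT*CONFIG=%d,"%s","%s"\r' % (seq, key, value)

def at_ftrim(seq):
    """AT*FTRIM tells the drone it is lying flat, setting the horizontal reference."""
    return 'AT*FTRIM=%d,\r' % seq

# Sequence numbers must increase monotonically within one control session.
cmds = [at_config(1, 'general:navdata_demo', 'TRUE'), at_ftrim(2)]
for c in cmds:
    print(repr(c))
```

In the real setup these strings are sent as UDP datagrams to port 5556 of the drone, while the Navdata stream arrives on local port 5554, as listed in the initialization above.<br />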
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
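The fly command that such a wrapper ultimately produces is an AT*PCMD string in which each of the four values in [-1, 1] is transmitted as the signed 32-bit integer that shares the bit pattern of its single-precision float representation (per the AR.Drone SDK2). A Python sketch of this conversion, with our own function names:<br />

```python
import struct

def float_as_int32(x):
    """Reinterpret the bits of a 32-bit float as a signed 32-bit integer,
    which is how AR.Drone AT*PCMD arguments are encoded on the wire."""
    return struct.unpack('<i', struct.pack('<f', x))[0]

def at_pcmd(seq, roll, pitch, gaz, yaw):
    """Progressive move command: tilt x/y, vertical speed, yaw rate, in [-1, 1]."""
    args = ','.join(str(float_as_int32(v)) for v in (roll, pitch, gaz, yaw))
    return 'AT*PCMD=%d,1,%s\r' % (seq, args)

# A half-strength backward-tilt command:
print(at_pcmd(3, 0.0, -0.5, 0.0, 0.0))
```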
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a framerate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, the result is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating several alternatives, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, one needs a Wi-Fi antenna. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. This information is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
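The per-pixel ground distance can be derived from the diagonal FOV and the aspect ratio. The Python sketch below is our own derivation from the numbers above (60° diagonal FOV, 4:3 aspect, 640 pixels wide), splitting the diagonal FOV into horizontal and vertical components and scaling by the drone altitude:<br />

```python
import math

def fov_components(diag_fov_deg, aspect_w=4, aspect_h=3):
    """Split a diagonal FOV into horizontal and vertical FOV angles (degrees)
    using the sensor aspect ratio."""
    diag = math.hypot(aspect_w, aspect_h)          # 5 for a 4:3 sensor
    t = math.tan(math.radians(diag_fov_deg) / 2)   # half-diagonal tangent
    hfov = 2 * math.degrees(math.atan(t * aspect_w / diag))
    vfov = 2 * math.degrees(math.atan(t * aspect_h / diag))
    return hfov, vfov

def mm_per_pixel(altitude_mm, diag_fov_deg=60, width_px=640):
    """Ground distance covered by one pixel when the camera looks straight down."""
    hfov, _ = fov_components(diag_fov_deg)
    ground_width = 2 * altitude_mm * math.tan(math.radians(hfov) / 2)
    return ground_width / width_px

hfov, vfov = fov_components(60)
print(round(hfov, 1), round(vfov, 1))   # roughly 49.6 and 38.2 degrees
print(round(mm_per_pixel(1500), 2))     # mm/px at 1.5 m altitude
```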
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. The software developed at TechUnited did not need any further extension, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and they can take images. Given the situation of the game and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling, which serves as feedback for the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x,y,θ), measured from the images of the top camera, are compared to the reference values. The high level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, which is the Low Level Controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. With identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the motion equation of the drone, an I-action is not necessary for the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead zone region have not been offset from the dead zone region. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
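The dead-zone PD behaviour described above can be summarized in a few lines. The Python sketch below is illustrative only: the gains and the dead-zone width are placeholder values, not the tuned project parameters.<br />

```python
def deadzone_pd(error, d_error, kp=0.6, kd=0.3, dead_zone=0.05):
    """PD controller with a dead zone: inside the comfort zone the output is
    zero; outside it a plain PD law is applied without offsetting the error,
    so no small commands land in the LLC's oscillation region."""
    if abs(error) < dead_zone:
        return 0.0
    return kp * error + kd * d_error

print(deadzone_pd(0.02, 0.1))   # inside dead zone -> no command
print(deadzone_pd(0.50, 0.1))   # outside -> PD output
```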
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of the rotations around the specific axes is important. In the field of automotive and/or aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
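Since only the yaw angle remains, the global-to-drone transformation is a planar rotation. A minimal Python sketch of this step (the sign convention is an assumption on our part; the actual implementation lives in Simulink):<br />

```python
import math

def global_to_drone(vx_g, vy_g, psi):
    """Rotate a velocity command from the global frame into the drone body
    frame using only the yaw angle psi (roll and pitch assumed zero)."""
    vx_d =  math.cos(psi) * vx_g + math.sin(psi) * vy_g
    vy_d = -math.sin(psi) * vx_g + math.cos(psi) * vy_g
    return vx_d, vy_d

# Drone yawed 90 degrees: a global +x command becomes a body-frame -y command.
print(global_to_drone(1.0, 0.0, math.pi / 2))
```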
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle that is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The TechUnited player-robots communicate with each other via the UDP communication protocol; this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. This piece consisted of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment; the information is then sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
Tolcer
https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45705
Implementation MSD16
2017-10-22T22:38:18Z
<p>Tolcer: /* Estimator */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (B.O.O.P.)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we would use this software to also process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
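The RGB-to-YCbCr conversion is a fixed linear transform. The sketch below uses the ITU-R BT.601 coefficients (the convention used by MATLAB's `rgb2ycbcr`), with R, G and B normalized to [0, 1]:<br />

```python
def rgb_to_ycbcr(r, g, b):
    """Convert normalized RGB ([0, 1]) to 8-bit YCbCr (BT.601 studio range):
    Y encodes brightness, Cb and Cr encode chroma, which makes thresholding
    colored balls and dark players easier than in RGB."""
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128 -  37.797 * r -  74.203 * g + 112.000 * b
    cr = 128 + 112.000 * r -  93.786 * g -  18.214 * b
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 1.0, 1.0))   # white: Y at top of range, neutral chroma
```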
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generations [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project] ; the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm is not changed. The essential update is the separation of the line detection codes from all detection codes which is created by the previous generation and creating the new function as an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that are used can be red, orange or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball: blobs that are too big or too small are removed from the list. For the remaining candidate balls in the list, a confidence is calculated, based on the blob size and roundness: <br />
<br />
confidence = (minorAxis / majorAxis) * (min(Rblob, Rball) / max(Rblob, Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
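The confidence formula above translates directly into code. A small Python sketch (the blob properties mirror what a blob recognition step such as MATLAB's `regionprops` would return; `r_ball` is the expected ball radius in pixels at the current altitude):<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Roundness term times size term: both are 1.0 for a perfectly round
    blob of exactly the expected ball radius, and drop towards 0 otherwise."""
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size

print(ball_confidence(20, 20, 10, 10))   # perfectly round, right size -> 1.0
print(ball_confidence(10, 20, 5, 10))    # elongated, half-sized -> 0.25
```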
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering in the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blobs that could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different from a player seen at an angle. A bigger range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. A detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball out of pitch refereeing skill function. However, it still sometimes yields false positive and false negative results; a further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since the collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs that is generated in the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and they are compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) && (minor_axis >= 2 * minimal_object_radius) && (major_axis >= 4 * minimal_object_radius)<br />
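The condition can be wrapped into a small predicate. A Python sketch, with the thresholds taken from the condition above (`minimal_object_radius` is the smallest expected player radius in pixels):<br />

```python
def possible_collision(major_axis, minor_axis, minimal_object_radius):
    """A merged blob of two touching players is elongated (axis ratio > 1.5)
    yet still at least one player wide and roughly two players long."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * minimal_object_radius
            and major_axis >= 4 * minimal_object_radius)

print(possible_collision(90, 42, 20))   # elongated double-sized blob -> True
print(possible_collision(45, 42, 20))   # single round-ish player -> False
```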
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in different ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore, these two coordinates have not been taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and the image processing, the drone altitude should also be known. The drone has its own altimeter, whose output data is accessible. The obtained drone altitude data is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in MATLAB is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further according to the following principles:<br />
* The focal center of the camera is assumed to coincide with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (which is its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the center of the image) with respect to the origin of the drone is known; it is a drone-fixed position vector that lies along the x-axis of the drone. Taking the principles above into account and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now, the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion changes with the height of the camera, so the height information of the drone must be used. From the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
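Putting the principles above together, the pixel-to-field conversion is a scale, a yaw rotation and a translation. The Python sketch below is our own composition of those steps; the camera offset value, the axis mapping and the sign conventions are assumptions for illustration only.<br />

```python
import math

def pixel_to_field(px, py, drone_x, drone_y, psi, mm_per_px,
                   cam_offset_mm=100.0, width_px=640, height_px=480):
    """Map an image pixel to field coordinates: re-center on the image
    center, scale pixels to millimeters with the altitude-dependent ratio,
    add the drone-fixed camera offset along the drone x-axis, rotate by
    the drone yaw psi, and translate by the drone position."""
    # Pixel offsets from the image center (image y grows downward).
    dx_mm = (px - width_px / 2) * mm_per_px
    dy_mm = -(py - height_px / 2) * mm_per_px
    # Body-frame offset of the object, camera offset included.
    bx, by = dx_mm + cam_offset_mm, dy_mm
    # Rotate into the field frame and translate by the drone position.
    fx = drone_x + math.cos(psi) * bx - math.sin(psi) * by
    fy = drone_y + math.sin(psi) * bx + math.cos(psi) * by
    return fx, fy

# Object at the image center, drone at the field origin with zero yaw:
# the object sits cam_offset_mm ahead of the drone.
print(pixel_to_field(320, 240, 0.0, 0.0, 0.0, 2.17))
```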
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by the agents. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not it has been updated by the agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path-planning block. The first is related to the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is also possible to take the velocity vector of the object into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, in the case of a large distance between drone and ball, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better performance of the tracking system, but more computational effort is needed. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion, including the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the Turtle. Hence, only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
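The search for the lead time t0 can be sketched as a simple scan: for each candidate t0, predict the ball position, compute the drone's time to target (TT), and stop at the first t0 with TT ≤ t0. The project's MATLAB implementation uses the identified drone-plus-controller model for TT; the constant-speed straight-line drone model below is an illustrative stand-in, and all names are hypothetical.

```python
def intercept_reference(ball_pos, ball_vel, drone_pos, drone_speed,
                        dt=0.05, t_max=5.0):
    """Search for the lead time t0 at which the drone's time-to-target
    equals t0, i.e. drone and ball arrive at the same point together.

    Assumes a ball moving at constant velocity and a drone travelling in
    a straight line at constant speed (a stand-in for the identified
    drone model used in the project)."""
    bx, by = ball_pos
    vx, vy = ball_vel
    dx, dy = drone_pos
    t0 = 0.0
    while t0 <= t_max:
        # predicted ball position t0 seconds ahead
        tx, ty = bx + vx * t0, by + vy * t0
        # time-to-target (TT) for the drone under the simple model
        tt = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5 / drone_speed
        if tt <= t0:          # first crossing of t0 = TT: intercept point
            return (tx, ty)
        t0 += dt
    # no intercept within the horizon: fall back to the current position
    return (bx, by)
```

For a stationary ball the search simply returns the ball position itself, which matches the remark that the scheme adds little when drone and object are already close.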
<br />
=== Collision avoidance ===<br />
When multiple drones fly above the field, the path planner should create paths for the agents that avoid collisions between them. This is done in the collision-avoidance block, which has higher priority than the optimal path planning computed from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet criteria that indicate an imminent collision. The supervisory control then switches to collision-avoidance mode to repel the drones from each other. This is done by sending a relatively strong velocity command to each drone, perpendicular to its velocity vector, in the direction that maintains a safe distance. This command is sent to the LLC and is stopped once the drones are back at safe positions. Since this project deals with only one drone, collision avoidance has not been implemented; it could, however, be an interesting extension for anyone continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
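The repulsion command described above can be sketched as follows: when two drones come within a trigger distance, each receives a velocity perpendicular to its own velocity vector, on the side pointing away from the other drone. The trigger distance and gain below are assumptions, not project values.

```python
import math

def _perp_away(vel, away, gain):
    """Unit vector perpendicular to `vel`, on the side pointing along
    `away`, scaled by `gain`. A hovering drone is pushed straight away."""
    vx, vy = vel
    n = math.hypot(vx, vy)
    if n == 0.0:
        m = math.hypot(*away)
        return (gain * away[0] / m, gain * away[1] / m)
    px, py = -vy / n, vx / n               # left-hand perpendicular
    if px * away[0] + py * away[1] < 0.0:  # flip to the side facing away
        px, py = -px, -py
    return (gain * px, gain * py)

def repel_commands(p1, v1, p2, v2, safe_dist=1.5, gain=1.0):
    """Velocity commands repelling two drones from each other, or None
    while they are at a safe distance (sketch of the collision-avoidance
    mode; safe_dist and gain are illustrative)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) >= safe_dist:
        return None                 # safe: stay in normal planning mode
    return (_perp_away(v1, (-dx, -dy), gain),
            _perp_away(v2, (dx, dy), gain))
```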
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
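The storage role of the WM can be sketched as below: state is read freely, but written only through explicit set-functions, mirroring the MATLAB class described above. This is a Python illustration; the method and property names are illustrative, not the project's exact API.

```python
class Player:
    def __init__(self):
        self.pos = (0.0, 0.0)          # last known (x, y), world frame

class WorldModel:
    """Minimal sketch of the WM as a storage unit: mutation only via
    sanctioned set-functions, so processes cannot accidentally
    overwrite WM data."""
    def __init__(self, n):
        self._ball = (0.0, 0.0)
        self._drone = (0.0, 0.0, 0.0)  # x, y, yaw
        # n players per team; the players form a class of their own
        self.players = [Player() for _ in range(2 * n)]

    # --- set-functions: the only sanctioned way to mutate WM data ---
    def set_ball(self, x, y):
        self._ball = (float(x), float(y))

    def set_drone(self, x, y, psi):
        self._drone = (float(x), float(y), float(psi))

    def set_player(self, i, x, y):
        self.players[i].pos = (float(x), float(y))

    # --- read access ---
    @property
    def ball(self):
        return self._ball

    @property
    def drone(self):
        return self._drone
```

Usage mirrors the tables above: `W = WorldModel(1)` creates a WM for one player per team, and `W.set_ball(2.0, 3.0)` is the only way to move the stored ball.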
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; rather, its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters from the estimate at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are both used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in the case of normally distributed uncertainty. This variance determines how much the measurement is trusted and distinguishes accurate from inaccurate sensors. In the current implementation this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
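The two-hypothesis scheme above can be sketched compactly: a 'strong' smoothed estimate, with the raw measurements acting as the 'weak' hypothesis, and a reset after two consecutive outliers beyond 0.5 m. The exponential blend below is a simplified stand-in for the project's α-weighted particle update; the 0.5 m threshold and two-outlier rule come from the text, the gain does not.

```python
import math

class BallTracker:
    """Sketch of the two-hypothesis ball filter. Two consecutive
    measurements further than `reset_dist` from the strong estimate
    reinitialise it from the weak hypothesis (the raw measurement);
    a single outlier is treated as a false positive."""
    def __init__(self, x0, alpha=0.8, reset_dist=0.5):
        self.est = x0
        self.alpha = alpha            # higher alpha = 'stronger' filter
        self.reset_dist = reset_dist
        self.outliers = 0

    def update(self, z):
        d = math.hypot(z[0] - self.est[0], z[1] - self.est[1])
        if d > self.reset_dist:
            self.outliers += 1
            if self.outliers >= 2:    # real change of direction
                self.est = z          # weak hypothesis becomes new start
                self.outliers = 0
            return self.est           # single outlier: ignore it
        self.outliers = 0
        a = self.alpha                # in-range measurement: blend it in
        self.est = (a * self.est[0] + (1 - a) * z[0],
                    a * self.est[1] + (1 - a) * z[1])
        return self.est
```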
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
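The greedy matching described above can be sketched as follows: each measurement takes its nearest player, and a measurement whose nearest player is already taken falls through to the next-nearest free one. As noted in the text, this is not globally optimal; the function name and data layout are illustrative.

```python
import math

def match_measurements(measurements, players):
    """Greedy nearest-neighbour matching of measured positions to last
    known player positions, as in the 'Match' function described above.
    Returns a dict: measurement index -> player index."""
    assignment = {}
    taken = set()
    for mi, m in enumerate(measurements):
        # players sorted by distance to this measurement
        order = sorted(range(len(players)),
                       key=lambda pi: math.hypot(m[0] - players[pi][0],
                                                 m[1] - players[pi][1]))
        for pi in order:
            if pi not in taken:       # fall through to next-nearest
                assignment[mi] = pi   # if the nearest is already taken
                taken.add(pi)
                break
    return assignment
```

With a high update frequency and two players this suffices; a globally optimal assignment (e.g. Hungarian algorithm) would be the natural upgrade for larger player counts.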
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for pitch angle, roll angle, yaw angle and the vertical direction. The corresponding forward and side velocities in the body frame are measured by sensors inside the drone. In addition, there are three LEDs on the drone which can be detected by the camera above the field; based on the LEDs in the captured image, the position and orientation of the drone in the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and reduce the measurement noise, so that the subsequent closed-loop control system for the drone can be robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-back tilt, a floating-point value in range [-1, 1]. Command (b) is the left-right tilt, a floating-point value in range [-1, 1]. d is the drone angular speed in range [-1, 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, Psi) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs, and the corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the reflected drone position information is incomplete. Fig. 2 gives a visual impression of the original data measured by the top camera; the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation estimates reasonable guess for empty data points. <br />
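The preprocessing step can be sketched as plain linear interpolation over the missing frames (roughly 25% of them), holding the nearest known value at the edges. The project used MATLAB interpolation; this pure-Python version is an illustrative equivalent for a uniformly sampled one-dimensional trace.

```python
def fill_gaps(samples):
    """Linearly interpolate missing (None) entries in a uniformly
    sampled position trace from the top camera. Leading and trailing
    gaps are held at the nearest known value."""
    out = list(samples)
    known = [i for i, v in enumerate(out) if v is not None]
    if not known:
        return out
    # hold the first/last known value at the edges
    for i in range(known[0]):
        out[i] = out[known[0]]
    for i in range(known[-1] + 1, len(out)):
        out[i] = out[known[-1]]
    # linear interpolation between consecutive known samples
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            out[i] = out[a] + t * (out[b] - out[a])
    return out
```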
====Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: one in the body frame and one in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
Data can be transformed between body frame and global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the identified model describes the response to the input commands (a, b, c, d) in the body frame, and the filtered data is transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame so as to avoid a parameter-varying Kalman filter. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
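The frame change used around the filter is a planar rotation by the yaw angle. A minimal sketch (assuming yaw in radians; the project builds the equivalent matrix in Simulink):

```python
import math

def body_to_global(vx_b, vy_b, psi):
    """Rotate a velocity measured in the drone body frame into the
    global frame using the yaw angle psi (radians)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_b - s * vy_b, s * vx_b + c * vy_b)

def global_to_body(vx_g, vy_g, psi):
    """Inverse rotation: global-frame commands back to the body frame."""
    return body_to_global(vx_g, vy_g, -psi)
```

Filtering in the body frame and rotating afterwards is what keeps the Kalman filter itself time-invariant, as described above.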
====Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response of input b is measured by the top camera. The preprocessed data is shown in following. And this processed data will be used in model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from these data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR.Drone, there are 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimation is reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the subsequent Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. For that purpose, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
The defined estimator blocks are the ball size, object size and line estimators. Using the most recent state of the drone together with the field of view and resolution of the camera (defined in the initialization function), these functions generate settings for the line, ball and object detection algorithms, reducing false positives, errors and the processing time of those algorithms. <br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill uses the ''imfindcircles'' built-in command of the MATLAB® Image Processing Toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' in the image should be specified. This can be calculated from the height of the agent carrying the camera, the field of view of the camera and the real size of the ball; the height is obtained from the drone position data, while the other quantities are defined in the initialization function. The estimated ball radius in pixels is then fed into the ball detection skill. <br />
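The pixel-radius calculation can be sketched with a pinhole model: the camera height and diagonal FOV give the real-world length of the image diagonal on the ground, hence a pixels-per-meter scale. The defaults below match the Ai-Ball camera described later on this page (60° diagonal FOV, 640x480); the function name and exact formula are an illustrative reconstruction, not the project's MATLAB code.

```python
import math

def ball_radius_pixels(height, ball_radius, fov_diag_deg=60.0,
                       res=(640, 480)):
    """Expected ball radius in pixels (e.g. for imfindcircles), from the
    camera height [m], the real ball radius [m], the diagonal FOV and
    the image resolution. Pinhole-model sketch, camera looking straight
    down."""
    w, h_px = res
    diag_px = math.hypot(w, h_px)
    # real-world length of the image diagonal on the ground plane
    diag_m = 2.0 * height * math.tan(math.radians(fov_diag_deg) / 2.0)
    px_per_m = diag_px / diag_m
    return ball_radius * px_per_m
```

Doubling the flight height halves the expected radius, which is why the estimator must be fed the live drone height.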
<br />
===Object Size Estimator===<br />
Very similarly to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, the real size of the objects is used here.<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. It continuously calculates the relative position of the outer lines corresponding to the state of the drone; this position information is encoded using the Hough transform convention. The line estimator is used to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled, otherwise it should be disabled. This information is also encoded in the output matrix, because an always-running Line Detection Skill would produce many false positive line outputs. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. One column is added to the final matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller and driver electronics. Since it is a consumer product, its design, body and controller are very robust. Therefore, the drone's own structure, control electronics and software are used for positioning of the drone; besides, controlling a drone from scratch is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera faces forward, but for refereeing it should look down. The first idea was therefore to disassemble it and mount it on a swivel, tilting it down 90 degrees, which would require some structural changes. Since the implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after considerable trial and error it turned out that capturing and transferring images from the embedded drone camera is not straightforward in MATLAB: its use is either incompatible with MATLAB or introduces a lot of delay. The idea of a swiveled drone camera was therefore abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is closed, it is very hard to access some of the data on the drone, including the camera images. Image processing is done in MATLAB, but taking snapshots from the drone camera directly with MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which costs processing time: the best capture rate obtained with the current algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, defined as shown in the figure. The captured images have a 16:9 ratio. Using this fact, the measurements showed the FOV to be close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2; the corresponding distance per pixel is calculated at standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
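The initialization above can be sketched in code. The AT-command format follows the AR.Drone SDK2 referenced above; the sequence numbers here are simplified (a real client must increment them for every command sent), and the 4-byte Navdata trigger is the conventional wake-up packet. This is a Python sketch, not the project's MATLAB implementation.

```python
import socket

DRONE_IP = "192.168.1.1"
AT_PORT, NAV_PORT = 5556, 5554

def at_cmd(name, seq, *args):
    """Format an AR.Drone AT command, e.g. AT*FTRIM=1\\r.
    String arguments are quoted, numeric ones are not."""
    parts = [str(seq)] + ['"%s"' % a if isinstance(a, str) else str(a)
                          for a in args]
    return ("AT*%s=%s\r" % (name, ",".join(parts))).encode()

def init_drone():
    """Open the UDP objects, wake the Navdata stream and set the
    horizontal-plane reference with FTRIM (sketch of the steps above)."""
    at = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.bind(("", NAV_PORT))
    nav.settimeout(0.001)                 # 1 ms timeout, as configured
    # trigger the Navdata stream by sending a few bytes to port 5554
    nav.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAV_PORT))
    # switch to the reduced 'demo' Navdata set
    at.sendto(at_cmd("CONFIG", 1, "general:navdata_demo", "TRUE"),
              (DRONE_IP, AT_PORT))
    # flat trim: horizontal reference for the internal controller
    at.sendto(at_cmd("FTRIM", 2), (DRONE_IP, AT_PORT))
    return at, nav
```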
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communication with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
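On the input side, the wrapper's four doubles end up in a PCMD-style AT command. Per the AR.Drone SDK2, each float argument is transmitted as the signed 32-bit integer that shares its IEEE-754 bit pattern; the sketch below shows only that conversion (the mapping of the four values to the actual command fields is left out).

```python
import struct

def pcmd_args(x, y, z, psi):
    """Convert the wrapper's four floats in [-1, 1] to the signed 32-bit
    integers sharing their IEEE-754 bit patterns, as the AR.Drone PCMD
    command expects (per the AR.Drone SDK2)."""
    return [struct.unpack("<i", struct.pack("<f", v))[0]
            for v in (x, y, z, psi)]
```

For example, -0.8 becomes -1085485875, the value used in the SDK's own documentation examples.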
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either: as long as the target of interest (ball, players) is within the field of view of the drone, the performance is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of the search, a WiFi webcam was finally selected; its details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a WiFi connection; a WiFi antenna is needed to connect to the camera. The camera is mounted at the front of the drone, facing down. To reduce the added weight, the camera's batteries were removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to determine the real-world size of the image frame and the corresponding real dimension per pixel; it is embedded in the Simulink code for converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further expansion, as parts of the extensive existing code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agent positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block and is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These outputs of the path-planning block are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project only the planar motion of the drone in (x, y) is of interest, since the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple paths such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor performs only small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project the HLC is designed and its parameters tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, through a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the equation of motion of the drone, an integral (I) action is not necessary in the controller. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands to the drone in the oscillation region.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
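The dead-zone PD law described above can be sketched in a few lines. The following is an illustrative Python fragment (the project itself uses MATLAB/Simulink); the function name and gain values are hypothetical:

```python
# Sketch of the dead-zone PD law, one axis of the high-level controller.
# Inside the dead zone the command is zero; outside it, a PD term on the
# raw error is used, without offsetting the error by the dead-zone width,
# so small commands near the oscillation region are avoided.
def deadzone_pd(error, d_error, kp, kd, dead_zone):
    """Return a velocity command for one axis given error and its derivative."""
    if abs(error) <= dead_zone:
        return 0.0  # comfort zone: no motion commanded
    return kp * error + kd * d_error
```

No integral term appears, matching the argument above that there is no position-dependent force in the drone's equation of motion.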
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Ф,φ,θ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is as important as the angles themselves. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
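With roll and pitch assumed zero, the transformation above reduces to a planar rotation by the yaw angle ψ. A minimal sketch (in Python for illustration; the project implementation is in MATLAB/Simulink, and the names are hypothetical):

```python
import math

# With roll and pitch assumed zero, a planar command (vx, vy) expressed
# in the global frame maps into the drone frame by a 2x2 rotation that
# depends only on the yaw angle psi.
def global_to_drone(vx_g, vy_g, psi):
    """Rotate a planar velocity command from the global frame into the drone frame."""
    c, s = math.cos(psi), math.sin(psi)
    vx_d = c * vx_g + s * vy_g
    vy_d = -s * vx_g + c * vy_g
    return vx_d, vy_d
```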
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To its left, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code from the code base of TechUnited was taken out. This piece consisted of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
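As a rough illustration of such a UDP link, the following Python sketch sends and receives a small comma-separated payload. The payload format and helper names are hypothetical, not the project's actual protocol (which uses the Simulink UDP blocks and the TechUnited S-function):

```python
import socket

# Illustrative UDP link: one side packs a planar position into a short
# comma-separated string and sends it; the other side receives and parses
# it. The payload format is an assumption for this sketch only.
def send_state(sock, addr, x, y):
    """Send an (x, y) position as a UDP datagram to addr."""
    sock.sendto(f"{x:.3f},{y:.3f}".encode(), addr)

def recv_state(sock):
    """Receive one datagram and parse it back into (x, y)."""
    data, _ = sock.recvfrom(1024)
    x, y = (float(v) for v in data.decode().split(","))
    return x, y
```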
<br />
=References=<br />
<references/></div>
Implementation MSD16 (2017-10-22T22:36:05Z)
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of the Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we would also use this software to process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
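For reference, the per-pixel RGB-to-YCbCr conversion (in the full-range ITU-R BT.601 form) can be sketched as follows. In practice the conversion is applied to whole images by the Matlab toolbox, so this Python fragment is purely illustrative:

```python
# Full-range ITU-R BT.601 conversion of one 8-bit RGB pixel to YCbCr.
# Y is luma (used to find the dark players); Cb/Cr carry the chroma
# information used to find the red/orange/yellow balls.
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to (Y, Cb, Cr)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```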
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself has not changed. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest a value of 0. Next, to filter out some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls, a confidence is calculated, based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
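The confidence formula above can be written directly as a small function; the names below are illustrative:

```python
# Confidence of a candidate ball blob, as in the formula above: the
# roundness term (minor/major axis ratio) is 1.0 for a perfect circle,
# and the size term compares the blob radius to the expected ball radius.
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```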
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of color filtering in the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the size range of blobs accepted as possible players is larger for object detection than for ball detection. This is because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A wider acceptance range ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. Similar to the line detection case, the algorithm developed by the previous generation is used essentially. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update and improvement were added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
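This condition translates directly into a small predicate; the thresholds come from the condition above, and the function name is illustrative:

```python
# Possible-collision test for a detected blob: a blob that is clearly
# elongated (axis ratio > 1.5) and at least as large as roughly two
# players is flagged as two merged player blobs, i.e. a possible collision.
def is_possible_collision(major_axis, minor_axis, min_object_radius):
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_object_radius
            and major_axis >= 4 * min_object_radius)
```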
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in diverse ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles have not been taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, whose output data is accessible. The obtained altitude data is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates should be transferred into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further according to the following principles:<br />
* The focal center of the camera is assumed to coincide with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (which is its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). However, this conversion ratio changes with the height of the camera. To account for this, the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
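Assuming a pinhole camera looking straight down (consistent with the principles listed above), the millimeters-per-pixel ratio can be sketched from the drone height and the horizontal FOV. The names and the pinhole assumption are illustrative:

```python
import math

# With the camera parallel to the ground at height h, the horizontal FOV
# angle and the image width in pixels give the ground footprint of the
# image, hence the size of one pixel in millimeters on the ground plane.
def mm_per_pixel(height_mm, fov_deg, image_width_px):
    footprint_mm = 2 * height_mm * math.tan(math.radians(fov_deg) / 2)
    return footprint_mm / image_width_px
```

For example, at 1 m altitude with a 90-degree FOV and a 2000-pixel-wide image, each pixel covers about 1 mm of ground.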
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as a task. The path planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for an agent's controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not it has been updated by an agent's camera. In the latter case, the particle filter gives an estimation of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the path planning block. The first is related to the case of multiple drones, in order to avoid collisions between them. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, in case of a large distance between drone and ball, the drone should track a position ahead of the object to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is which time ahead t0 is optimal to set as the desired reference. To solve this, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching algorithm, for each time step ahead of the ball, the time-to-target (TT) for the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for ground agents, which move only in one direction, the same strategy can be applied. For the ground robot, the reference value should be determined only in the moving direction of the Turtle; hence, only the X component (Turtle moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
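The searching algorithm can be sketched as a simple scan over candidate look-ahead times. Here the time-to-target model is a placeholder (straight-line flight at a constant cruise speed) standing in for the identified drone model with its controller, and all names are illustrative:

```python
# Scan look-ahead times t0 on a grid, predict the ball position at t + t0
# (constant velocity), and return the first t0 whose predicted drone
# time-to-target TT satisfies TT <= t0, i.e. approximately t0 = TT.
def reference_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    t_max=5.0, dt=0.05):
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = ((target[0] - drone_pos[0]) ** 2
                + (target[1] - drone_pos[1]) ** 2) ** 0.5
        tt = dist / drone_speed       # placeholder time-to-target model
        if tt <= t0:
            return target, t0
        t0 += dt
    # no crossing within the horizon: fall back to the horizon prediction
    return (ball_pos[0] + ball_vel[0] * t_max,
            ball_pos[1] + ball_vel[1] * t_max), t_max
```

For a drone at the origin flying at 2 m/s toward a ball at (1, 0) moving away at 1 m/s, the crossing point is one second ahead at roughly (2, 0).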
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents in a way that avoids collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning calculated based on the objectives of the drones (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that results in collision avoidance, and is stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be a possible area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
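The repel command described above (a velocity perpendicular to each drone's own velocity, pointing away from the other drone) could be sketched as follows; names and sign conventions are illustrative, since this block was not implemented in the project:

```python
# Choose, of the two directions perpendicular to the drone's velocity v,
# the one pointing away from the other drone, and scale it to the
# requested repel speed.
def repel_command(v, rel_pos, speed):
    """v: own velocity (vx, vy); rel_pos: other drone's position minus ours."""
    p1 = (-v[1], v[0])   # one perpendicular candidate
    p2 = (v[1], -v[0])   # the opposite perpendicular candidate
    # pick the candidate whose dot product with rel_pos is negative,
    # i.e. the one pointing away from the other drone
    away = p1 if (p1[0] * rel_pos[0] + p1[1] * rel_pos[1]) < 0 else p2
    norm = (away[0] ** 2 + away[1] ** 2) ** 0.5 or 1.0
    return (speed * away[0] / norm, speed * away[1] / norm)
```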
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one uses a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one uses a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track of whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
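The reinitialization rule above can be sketched as a small check; the 0.5 m threshold is taken from the text, and the names are illustrative:

```python
# If the two most recent measurements are both more than `threshold`
# meters from the strong filter's estimate, a change of direction is
# assumed and the last measurement becomes the new initial position of
# the strong filter; otherwise the current estimate is kept.
def check_reinit(estimate, measurements, threshold=0.5):
    """measurements: the two most recent (x, y) measurements."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    if all(dist(m, estimate) > threshold for m in measurements):
        return measurements[-1]   # new initial position for the strong filter
    return None                   # keep the current estimate
```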
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled in the same way as for the ball position, i.e. any number of sensors can be the input to this filter, where each would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed, irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw rate and vertical speed. The corresponding forward and sideways velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the subsequent closed-loop control of the drone can be robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-backward tilt, a floating-point value in the range [-1, 1]. Command (b) is the left-right tilt, a floating-point value in the range [-1, 1]. d is the drone angular speed in the range [-1, 1]. Forward and side velocities are displayed in the body frame (orange). The position (x, y, Psi) is displayed in the global frame (blue). ]]</center><br />
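The predict/update structure that deals with dropped camera frames could be sketched as below. The state-space matrices are placeholders for the identified model, not the project's actual values; the implementation itself is in Simulink, so this Python version is only illustrative.

```python
import numpy as np

def kf_step(x, P, A, B, u, C, Q, R, z=None):
    """One Kalman filter step for the drone position estimate.

    Predicts with the identified model (A, B, with process noise Q) and
    updates with the top-camera measurement z (noise R) only when the
    LEDs were actually detected (z is not None), so the estimate keeps
    evolving through dropped frames.
    """
    # prediction with the identified model
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    if z is not None:
        # measurement update with the top-camera detection
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (z - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
    return x, P
```

When roughly 25% of the camera frames miss the LEDs, those steps simply run prediction-only, and the covariance P grows until the next successful detection pulls the estimate back.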
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the reflected drone position information is incomplete. The example (fig. 2) gives a visual impression of the original data measured by the top camera. Based on fig. 2, the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make it continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces a reasonable guess for the empty data points. <br />
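The interpolation step above amounts to filling the empty camera samples from their neighbours. A minimal sketch (the project does this preprocessing in MATLAB; names here are illustrative):

```python
import numpy as np

def fill_gaps(t, x):
    """Linearly interpolate the empty (NaN) top-camera samples.

    t: sample times; x: measured position, with NaN where the camera
    returned no drone detection (~25% of the samples).
    """
    x = np.asarray(x, dtype=float)
    good = ~np.isnan(x)
    # linear interpolation through the valid samples only
    return np.interp(t, np.asarray(t)[good], x[good])
```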
====Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one in the body frame, the other in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame coordinate system, whereas the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic concept is to filter the data in the body frame so as to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
====Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)'''<br><br><br />
The response to input b is measured by the top camera. The preprocessed data is shown in the following and will be used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figure above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions need to be made to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is demonstrated in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates to what extent the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone is modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may cause the mismatching part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the further Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone is, however, a critical issue that has been investigated: the data selected for identification is measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce the false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill is achieved using the ''imfindcircles'' built-in command of the Image Processing Toolbox of MATLAB®. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in ''pixels'' in the image should be defined. This can be calculated from the available height information of the agent carrying the camera, the field of view of the camera, and the real size of the ball. The height information is obtained from the drone position data; the others are defined in the initialization function. The estimated ball radius in pixels is then fed into the ball detection skill. <br />
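The height/FOV/ball-size calculation described above could be sketched as follows, assuming a pinhole camera looking straight down. The function name, the use of a horizontal FOV and the default ball radius are illustrative assumptions, not the project's exact code.

```python
import math

def expected_ball_radius_px(height_m, fov_deg, image_width_px,
                            ball_radius_m=0.11):
    """Estimate the ball radius in pixels as seen from the drone.

    With a pinhole camera looking straight down, the ground width covered
    by the image is 2*h*tan(FOV/2), so one pixel covers
    (ground width / image width) metres; dividing the real ball radius
    by that gives the expected radius in pixels.
    """
    ground_width_m = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    m_per_px = ground_width_m / image_width_px
    return ball_radius_m / m_per_px
```

The result can be passed (with a tolerance band) as the expected radius range for the circle detection.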
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, here the real size of the objects is defined.<br />
===Line Estimator===<br />
The Line Estimator block gives the expected outer lines of the field. This estimator continuously calculates the relative position of the outer lines corresponding to the state of the drone. This position information is encoded using the Hough transform convention. The line estimator is needed to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled; otherwise, it should be disabled. This information is also encoded in the output matrix, because an always-running Line Detection Skill would produce many false positive line outputs. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''.<br />
<br />
More detailed information on the algorithm behind this estimator is explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the final matrix to indicate whether the predicted line is an ''end'' or a ''side'' line.<br />
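The core of such an estimator, expressing a known field line relative to the drone pose in Hough (ρ, θ) form and deciding whether detection should be enabled, could be sketched as below. This is a simplified illustration under a flat, downward-looking camera assumption; the names and the visibility test are assumptions.

```python
import math

def line_in_drone_frame(rho_w, theta_w, xd, yd, psi):
    """Express a field line, given in Hough form in the world frame
    (x*cos(theta) + y*sin(theta) = rho), relative to the drone pose
    (xd, yd, psi). Shifting the origin subtracts the projection of the
    drone position on the line normal; rotating subtracts the yaw."""
    rho_d = rho_w - (math.cos(theta_w) * xd + math.sin(theta_w) * yd)
    theta_d = theta_w - psi
    return rho_d, theta_d

def line_visible(rho_d, view_radius):
    """Enable the Line Detection Skill only if the line can fall inside
    the camera footprint (perpendicular distance below the view radius)."""
    return abs(rho_d) <= view_radius
```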
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, controlling a drone from scratch is complicated and also out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing, it should look downwards. Therefore, the first idea was to disassemble it and mount the camera on a swivel to tilt it down 90 degrees, which would require some structural changes. Since the whole implementation is achieved in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera is not easy or straightforward in MATLAB: using this camera is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some data on the drone, including the camera images. The image processing is performed in MATLAB, but taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a view close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2. The corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained using the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
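The initialization steps above could be sketched as follows, using the IP address and port numbers from the list above. This is an illustrative Python sketch, not the project's MATLAB code; the exact AT command syntax (including the navdata wake-up packet and the trailing comma in AT*FTRIM) follows common AR.Drone client implementations and should be checked against the AR.Drone SDK documentation.

```python
import socket

DRONE_IP = "192.168.1.1"
AT_PORT = 5556       # control port from the list above
NAVDATA_PORT = 5554  # navdata port from the list above

def at_config(seq, key, value):
    """Build an AT*CONFIG command string (AR.Drone SDK syntax)."""
    return 'AT*CONFIG=%d,"%s","%s"\r' % (seq, key, value)

def at_ftrim(seq):
    """Build the flat-trim command that sets the horizontal reference."""
    return "AT*FTRIM=%d,\r" % seq

def init_drone(sock_nav, sock_at, seq=1):
    """Initiate the navdata stream and set the horizontal reference:
    wake the navdata port, switch the drone to 'demo' navdata mode,
    then send FTRIM as described above."""
    sock_nav.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAVDATA_PORT))
    sock_at.sendto(at_config(seq, "general:navdata_demo", "TRUE").encode(),
                   (DRONE_IP, AT_PORT))
    sock_at.sendto(at_ftrim(seq + 1).encode(), (DRONE_IP, AT_PORT))
```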
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
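The input side of such a wrapper, turning the four doubles in [-1, 1] into the UDP command string, could be sketched as below. This is a hedged illustration, not the project's code; it relies on the AR.Drone SDK convention that float arguments are transmitted as the signed 32-bit integer with the same bit pattern, and on the SDK's AT*PCMD argument order (flag, left-right tilt, front-back tilt, vertical speed, yaw rate).

```python
import struct

def f2i(x):
    """AR.Drone SDK convention: a 32-bit float argument is transmitted
    as the signed integer sharing its IEEE-754 bit pattern."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def at_pcmd(seq, tilt_lr, tilt_fb, vz, vyaw):
    """Build the progressive move command from the wrapper's four inputs
    in [-1, 1]: left-right tilt, front-back tilt, vertical speed, yaw rate."""
    flag = 1  # progressive commands enabled
    return "AT*PCMD=%d,%d,%d,%d,%d,%d\r" % (
        seq, flag, f2i(tilt_lr), f2i(tilt_fb), f2i(vz), f2i(vyaw))
```

The output side of the wrapper would, conversely, decode the battery, attitude, velocity and altitude fields from the 500-byte navdata packet into doubles.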
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of this search, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, one needs a Wi-Fi antenna. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. This information is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
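The conversion from the 60° diagonal FOV to a ground resolution could be sketched as follows, under the same pinhole, downward-looking assumption used above; the function name and signature are illustrative.

```python
import math

def metres_per_pixel(height_m, diag_fov_deg=60.0, w_px=640, h_px=480):
    """Ground resolution of the downward-facing Ai-Ball camera.

    The diagonal ground coverage is 2*h*tan(FOV/2); dividing by the
    image diagonal in pixels gives metres per pixel."""
    diag_m = 2.0 * height_m * math.tan(math.radians(diag_fov_deg) / 2.0)
    diag_px = math.hypot(w_px, h_px)  # 800 px for 640x480
    return diag_m / diag_px
```

At 1 m height this gives roughly 1.4 mm per pixel, which is what makes converting detected pixel positions to world coordinates possible.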
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are trajectories like straight lines, while aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ) measured from the top camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. With identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the motion equation of the drone, an I action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region have not been offset by the dead zone. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
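The dead-zone PD law described above could be sketched per axis as below. This is an illustrative sketch (the controller is implemented in Simulink); the function name and gains are placeholders.

```python
def hlc_output(err, derr, dead_zone, kp, kd):
    """Dead-zone PD law of the high-level controller (one axis).

    Inside the dead zone the command is zero (the comfort zone);
    outside it, a PD command is computed from the full error,
    deliberately NOT offset by the dead zone, to avoid sending small
    commands in the LLC's oscillatory region. No I action: the drone
    dynamics contain no position-dependent force.
    """
    if abs(err) < dead_zone:
        return 0.0
    return kp * err + kd * derr
```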
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles; within this method, the order of rotation around the specific axes is important. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
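The resulting yaw-only transformation could be sketched as below. The sign convention depends on how the frames are defined, so this is one common convention, not necessarily the project's exact matrix.

```python
import math

def global_to_body(vx_g, vy_g, psi):
    """Rotate a planar command from the global frame into the drone body
    frame using the yaw angle psi only (pitch and roll assumed small):
    [vx_b, vy_b] = R(-psi) [vx_g, vy_g]."""
    c, s = math.cos(psi), math.sin(psi)
    return c * vx_g + s * vy_g, -s * vx_g + c * vy_g
```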
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot with a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only a part was handpicked, as it suited the needs of the project best. This data, as stated earlier, is the location information of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div><br />
<br />
Implementation MSD16<br />
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably, we could use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transform the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is not changed. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created where the pixels that fall into this corner get a value of 1 and the rest a value of 0. Next, to filter some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball: blobs that are too big or too small are removed from the list. For the remaining ball candidates, a confidence is calculated, based on the blob size and roundness: <br />
<br />
confidence = (minorAxis / majorAxis) * (min(R_blob, R_ball) / max(R_blob, R_ball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
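The confidence computation above can be sketched as follows. This is a minimal Python illustration (the actual implementation uses Matlab's image processing toolbox); the function and variable names are hypothetical.

```python
def ball_confidence(minor_axis, major_axis, r_expected):
    """Confidence that a blob is the ball, as described above:
    roundness (minor/major axis ratio) times the ratio of the
    measured blob radius to the expected ball radius (in pixels)."""
    r_blob = (minor_axis + major_axis) / 4.0  # mean radius from the two axis lengths
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_expected) / max(r_blob, r_expected)
    return roundness * size_match

# A nearly round blob of roughly the expected size scores close to 1:
print(round(ball_confidence(38, 42, 20), 2))
```

A perfectly round blob of exactly the expected radius yields a confidence of 1; elongated or wrongly sized blobs are penalized multiplicatively.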
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than the surroundings. Moreover, the range of accepted blob sizes is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball is: seen from the top, a player appears different than when seen from an angle. A bigger acceptance range for blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. A detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take the images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs that is generated in the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
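The condition can be wrapped in a small predicate. This Python sketch mirrors the stated thresholds (the project itself uses Matlab); all names are hypothetical.

```python
def possible_collision(major_axis, minor_axis, min_radius):
    """Flag a blob as a possible collision: clearly elongated
    (axis ratio > 1.5) yet large enough in both directions to
    contain two players rather than one noisy detection."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)

# Two touching players seen from above form one long blob:
print(possible_collision(major_axis=90, minor_axis=22, min_radius=10))  # True
# A single round player does not trigger the check:
print(possible_collision(major_axis=22, minor_axis=20, min_radius=10))  # False
```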
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (in this case only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch, yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image-processing algorithms are developed under an essential assumption: the drone's angular position is well stabilized such that roll (φ) and pitch (θ) are zero. Therefore, these two angles are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude must also be known. The drone has its own altimeter and the output data of the altimeter is accessible. The obtained drone altitude data is fused with the planar position data and the following position vector is obtained for the drone.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Next, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further under the following assumptions:<br />
* The center of the image coincides with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of image) with respect to the origin of the drone is known: it is a drone-fixed position vector that lies along the x-axis of the drone. Taking into account the assumptions above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should then be aligned as shown in the figure. <br />
<br />
Finally, the calculated pixel coordinates of the detected object must be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera. Using the height of the drone and the FOV of the camera, the pixel-to-millimeter ratio is calculated. More detailed information about the FOV is given in the following sections.<br />
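Assuming a simple pinhole model with the camera pointing straight down (roll = pitch = 0), the pixel-to-millimeter conversion can be sketched as follows. This is a Python illustration with hypothetical names and an assumed 70° horizontal FOV over a 640x360 image; the project implementation is in Matlab.

```python
import math

def mm_per_pixel(height_mm, fov_deg, image_width_px):
    """Ground-plane millimeters covered by one pixel: the image
    spans 2*h*tan(FOV/2) millimeters across its width."""
    ground_width_mm = 2.0 * height_mm * math.tan(math.radians(fov_deg) / 2.0)
    return ground_width_mm / image_width_px

def pixel_to_camera_frame(px, py, height_mm, fov_deg=70.0, width=640, height_px=360):
    """Convert pixel offsets from the image center to millimeters in
    the camera frame (still to be rotated/translated into the field
    frame using the drone pose, as described in the text)."""
    s = mm_per_pixel(height_mm, fov_deg, width)
    return ((px - width / 2.0) * s, (py - height_px / 2.0) * s)

print(round(mm_per_pixel(1000, 70, 640), 2))  # ≈ 2.19 mm of ground per pixel at 1 m altitude
```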
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as tasks. The path planning block then requests from the world model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about any object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object in order to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated future position of the ball is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal look-ahead time t0 to be used for the reference. To solve this, we need a model of the drone motion including the controller, to calculate the time it takes to reach a certain point given the initial conditions of the drone. In the search algorithm, for each time step ahead of the ball, the time to target (TT) of the drone is calculated (see Fig.3). The target position is simply the predicted ball position at the look-ahead time. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the turtle, so only the x-component (the turtle's moving direction) of the position and velocity of the object of interest needs to be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
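The search for the look-ahead time t0 can be sketched as follows. This Python illustration assumes a constant-velocity ball prediction and a deliberately crude drone model (straight-line flight at a fixed maximum speed) in place of the controller-based motion model mentioned above; all names and parameter values are hypothetical.

```python
def time_to_target(drone_pos, target, v_max=2.0):
    """Crude drone motion model: straight-line flight at v_max.
    A real model would include controller/acceleration dynamics."""
    d = ((target[0] - drone_pos[0])**2 + (target[1] - drone_pos[1])**2) ** 0.5
    return d / v_max

def reference_point(drone_pos, ball_pos, ball_vel, dt=0.05, t_max=5.0):
    """Search for the smallest look-ahead t0 with TT <= t0, i.e. the
    earliest point where the drone can intercept the predicted ball.
    Falls back to the current ball position if no t0 <= t_max works."""
    t0 = 0.0
    while t0 <= t_max:
        # constant-velocity prediction of the ball at time t + t0
        target = (ball_pos[0] + ball_vel[0] * t0, ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(drone_pos, target) <= t0:
            return target
        t0 += dt
    return ball_pos

# Drone at the origin, ball 4 m away moving toward it at 1 m/s:
print(reference_point((0.0, 0.0), (4.0, 0.0), (-1.0, 0.0)))
```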
<br />
=== Collision avoidance ===<br />
When several drones are flying above the field, path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning based on the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from each other. This is accomplished by sending a relatively strong velocity command to each drone, perpendicular to its velocity vector, in a direction that maintains a safe distance. This command is sent to the LLC and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
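The storage pattern can be sketched as follows. Note that this Python illustration cannot enforce access control the way a MATLAB class with private properties can; it only mirrors the 'set'-function interface described above, and all names other than WorldModel are hypothetical.

```python
class Player:
    """Per-player storage; players are a class of their own, as in the text."""
    def __init__(self):
        self.pos = None

class WorldModel:
    """Minimal sketch of the storage role: data is read freely but
    changed only through 'set' functions, mirroring W = WorldModel(n)
    with n players per team (two teams, one hardcoded ball)."""
    def __init__(self, n):
        self.ball = None
        self.drone = None
        self.turtle = None
        self.players = [Player() for _ in range(2 * n)]

    def set_ball(self, pos):
        self.ball = pos

    def set_player(self, i, pos):
        self.players[i].pos = pos

W = WorldModel(2)
W.set_ball((1.0, 2.0))
print(W.ball)  # (1.0, 2.0)
```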
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
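The re-initialization logic described above (two consecutive measurements more than 0.5 m from the estimate trigger a reset of the strong filter) can be sketched as follows. This Python fragment is illustrative only and all names are hypothetical; the ordinary in-threshold update is left to the particle filter itself.

```python
def update_hypothesis(estimate, measurement, outliers, threshold=0.5, n_required=2):
    """Track how many consecutive measurements disagree with the
    'strong' estimate by more than `threshold` meters. Once
    `n_required` in a row do, re-initialize the strong filter at the
    last measurement (the 'weak' hypothesis wins).
    Returns (new_estimate, outlier_count, reinitialized)."""
    dist = ((measurement[0] - estimate[0])**2 + (measurement[1] - estimate[1])**2) ** 0.5
    if dist > threshold:
        outliers += 1
        if outliers >= n_required:
            return measurement, 0, True   # change of direction: reset the strong filter
        return estimate, outliers, False  # possible false positive: hold the estimate
    return estimate, 0, False             # in-threshold: normal particle filter update
```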
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors would pass along a confidence parameter, such as a variance in the case of normally distributed uncertainty. This variance determines how much the measurement is trusted and distinguishes accurate from inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one player to be able to detect any collision between two players. In order to track players even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with sensors that can detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
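The greedy nearest-neighbor matching of the 'Match' function can be sketched as follows (Python illustration with hypothetical names; the actual implementation is in Matlab). When two measurements compete for the same player, the later measurement simply takes the nearest remaining player, reproducing the non-optimal behavior noted above.

```python
def match_measurements(measurements, players):
    """Greedily assign each measured position to the closest
    not-yet-taken player. Returns a list of player indices, one per
    measurement. This is not globally optimal (unlike, e.g., the
    Hungarian algorithm), mirroring the limitation in the text."""
    taken = set()
    matches = []
    for m in measurements:
        best, best_d = None, float("inf")
        for i, p in enumerate(players):
            if i in taken:
                continue
            d = (m[0] - p[0])**2 + (m[1] - p[1])**2  # squared distance suffices
            if d < best_d:
                best, best_d = i, d
        taken.add(best)
        matches.append(best)
    return matches

# Each measurement is matched to its nearest known player position:
print(match_measurements([(0, 0), (5, 5)], [(5, 5), (0, 1)]))  # [1, 0]
```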
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for pitch angle, roll angle, yaw angle and vertical speed. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the top camera cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and reduce the measurement noise, so that the closed-loop control of the drone can be made robust. Since the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visualization of the original data measured from the top camera. Based on fig.2, the data clearly indicates the motion of the drone in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces a reasonable estimate for the empty data points. <br />
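The interpolation step can be sketched as follows. This is a Python illustration of linear interpolation over missing samples; the actual preprocessing is done in Matlab and the names here are hypothetical.

```python
def fill_gaps(samples):
    """Linearly interpolate missing top-camera samples (None) between
    the nearest valid neighbors; gaps at the edges are held at the
    nearest valid value."""
    out = list(samples)
    valid = [i for i, s in enumerate(samples) if s is not None]
    if not valid:
        return out
    for i in range(len(out)):
        if out[i] is not None:
            continue
        prev = max((v for v in valid if v < i), default=None)
        nxt = min((v for v in valid if v > i), default=None)
        if prev is None:
            out[i] = samples[nxt]       # leading gap: hold first valid value
        elif nxt is None:
            out[i] = samples[prev]      # trailing gap: hold last valid value
        else:
            t = (i - prev) / (nxt - prev)
            out[i] = samples[prev] + t * (samples[nxt] - samples[prev])
    return out

print(fill_gaps([0.0, None, None, 3.0]))  # gaps filled linearly
```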
==== Coordinate system introduction ====<br />
As the drone is a flying object with four controlled degrees of freedom in the field, two coordinate systems are used: one in the body frame and one in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global frame. <br><br><br />
The data can be transformed between body frame and global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter. The identified model is the response to the input commands (a, b, c and d) in the body frame. The filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
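The body-to-global transformation applied outside the Kalman filter can be sketched as a planar rotation by the yaw angle (Python illustration; names are hypothetical):

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global field frame using
    the drone yaw psi (radians), as in the block diagram: filtering
    happens in the body frame, the rotation is applied outside the
    Kalman filter."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body, s * vx_body + c * vy_body)
```

With the drone yawed 90°, a pure forward velocity in the body frame maps onto the global y-axis.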
==== Model identification from input to position ====<br />
Theoretically, the inputs and the corresponding velocity outputs are decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above shows the input and corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world, nothing is perfectly linear, due to external disturbances and component uncertainty. Hence, some assumptions need to be made to help Matlab make a reasonable estimate of the model. Based on the response of the output, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in Matlab. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay in the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there is a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world, nothing is perfectly linear; the nonlinear behavior of the system may cause the mismatch of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the subsequent Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y-direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, some ''a priori'' knowledge about the objects to be detected is necessary. For example, in the case of ''circle detection'' the expected diameter of the circles is important. To this end, the following estimator functions are defined as part of the ''World Model''.<br />
<br />
===Ball Size Estimator===<br />
The Ball Detection Skill uses the ''imfindcircles'' built-in command of the MATLAB® image processing toolbox. To run this function efficiently and reduce false positive ball detections, the expected ''radius'' of the ball in the image, in units of ''pixels'', should be defined. This can be calculated from the available height of the agent carrying the camera, the field of view of the camera, and the real size of the ball. The height information is obtained from the drone position data; the other quantities are defined in the initialization function. The obtained estimated ball radius (in pixels) is fed into the ball detection skill. <br />
<br />
===Object Size Estimator===<br />
Very similar to the ball case, the expected size of the objects in pixels is estimated using the drone height and FOV. Instead of the ball radius, here the real size of the objects is defined.<br />
===Line Estimator===<br />
The ''Line Estimator'' block gives the expected outer lines of the field. This estimator continuously calculates the relative position of the outer lines corresponding to the state of the drone. This position information is encoded using the Hough transform criteria. The line estimator is needed to enable and disable line detection on the outer lines: if some of the outer lines are in the field of view of the drone camera, the Line Detection Skill should be enabled; otherwise it should be disabled. This enable flag is also encoded in the output matrix, because an always-running Line Detection Skill would produce many spurious line detections. The expected positions of the outer lines are not only used for enabling and disabling the Line Detection Skill: since the relative orientation and position of the lines are computable, this information is also used to filter out false positive results of the line detection when it is enabled. The filtered lines are then used for the ''Refereeing Task''. <br />
More detailed information on the algorithm behind this estimator is explained [http://cstwiki.wtb.tue.nl/index.php?title=Field_Line_predictor here]. However, one column is added to the final matrix to indicate whether the predicted line is an ''end'' or ''side'' line.<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Moreover, low-level control of a drone is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; for refereeing, however, it should look downwards. Therefore, the first idea was to disassemble it and mount it on a swivel so it could be tilted down by 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images need to be accessible from MATLAB. However, after considerable trial and error, it was observed that capturing and transferring the images of the embedded drone camera to MATLAB is neither easy nor straightforward. Further effort showed that using this drone camera for capturing images is either not compatible with MATLAB or introduces a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a horizontal FOV of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2. Here, the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, that camera is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera is called the Ai-Ball and is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
For all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
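The initialization sequence above can be sketched as follows. This is a minimal Python sketch (the project itself uses MATLAB); the AT command strings follow the AR.Drone SDK2 format, while the exact wake-up payload and the <code>navdata_demo</code> configuration option are assumptions based on that documentation.<br />

```python
import socket

DRONE_IP = "192.168.1.1"
AT_PORT = 5556       # control (AT commands)
NAVDATA_PORT = 5554  # navdata stream

def at_command(name, seq, *args):
    """Build an AR.Drone AT command string, e.g. AT*FTRIM=1\\r."""
    payload = ",".join(str(a) for a in (seq,) + args)
    return "AT*{}={}\r".format(name, payload)

def initialize(sock):
    """Wake up the navdata stream and set the horizontal-plane reference."""
    # A short packet to the navdata port triggers the drone to start streaming.
    sock.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAVDATA_PORT))
    # Switch navdata to the reduced "demo" mode (assumed option name).
    sock.sendto(at_command("CONFIG", 1, '"general:navdata_demo"', '"TRUE"').encode(),
                (DRONE_IP, AT_PORT))
    # Set the reference of the horizontal plane (drone must rest on flat ground).
    sock.sendto(at_command("FTRIM", 2).encode(), (DRONE_IP, AT_PORT))
```

Note that the sequence numbers passed to <code>at_command</code> must increase monotonically, as required by the AT protocol.<br />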
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be regarded as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
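The input side of such a wrapper can be sketched as below. The AR.Drone SDK2 transmits the four command values inside an AT*PCMD string, each 32-bit float encoded as the signed integer sharing its bit pattern; the helper names here are illustrative, not the project's actual function names.<br />

```python
import struct

def f2i(x):
    """Reinterpret a 32-bit float's bit pattern as a signed 32-bit integer."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def pcmd(seq, tilt_x, tilt_y, v_z, w_psi):
    """Build a progressive movement command from the four wrapper inputs
    (tilt front, tilt left, vertical speed, yaw rate), each in [-1, 1]."""
    args = ",".join(str(f2i(v)) for v in (tilt_x, tilt_y, v_z, w_psi))
    return "AT*PCMD={},1,{}\r".format(seq, args)
```

For example, the SDK documents that the float -0.8 is transmitted as the integer -1085485875.<br />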
<br />
== Top-Camera ==<br />
The topcam is a camera fixed above the playing field. It is used to estimate the location and orientation of the drone, and this estimate is used as feedback for the drone to position itself at a desired location.<br />
The topcam can stream images to the laptop at a framerate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After this investigation, we decided to use a WiFi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a WiFi connection; to connect to the camera, a WiFi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. This information is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
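The conversion from pixels to millimeters can be sketched as follows, assuming an ideal pinhole camera held parallel to the ground; the 60° diagonal FOV and 4:3 aspect ratio of the Ai-Ball are taken from above, while the function name is illustrative (a Python sketch of the Simulink computation).<br />

```python
import math

DIAG_FOV_DEG = 60.0             # diagonal field of view of the Ai-Ball
WIDTH_PX, HEIGHT_PX = 640, 480  # 4:3 image

def mm_per_pixel(height_mm):
    """Ground-plane millimeters covered by one pixel at a given camera height."""
    # For a 4:3 image the diagonal is 5 units when the width is 4 (3-4-5
    # triangle), so the half-width tangent is 4/5 of the half-diagonal tangent.
    tan_half_diag = math.tan(math.radians(DIAG_FOV_DEG / 2.0))
    tan_half_width = tan_half_diag * 4.0 / 5.0
    ground_width_mm = 2.0 * height_mm * tan_half_width
    return ground_width_mm / WIDTH_PX
```

At 1 m altitude this gives roughly 1.44 mm per pixel, and the ratio scales linearly with height.<br />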
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and they can take images. Given the situation of the game and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image-processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are trajectories like straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig.2, the drone states (x,y,θ) measured from the top camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in a direction is smaller than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined by the error and its derivative with PD coefficients (Fig.4). Since there is no position-dependent force in the equations of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width; this prevents small commands in the oscillation region from being sent to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
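The dead-zone PD law described above can be sketched as follows (a minimal Python illustration; the gains, dead-zone width and function name are placeholders, not the tuned project values).<br />

```python
def hlc_output(error, d_error, kp=1.0, kd=0.2, dead_zone=0.1):
    """Dead-zone PD action for one drone state (x, y or yaw).

    Inside the dead zone the command is zero, so the drone rests in its
    comfort zone; outside it, a plain PD action is applied without
    offsetting the error by the dead-zone width."""
    if abs(error) < dead_zone:
        return 0.0
    return kp * error + kd * d_error
```

One such controller runs per state, and the three outputs form the global-frame velocity command sent to the LLC.<br />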
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes matters, as does the sequence of rotations. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
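Under the small roll/pitch assumption, transforming a global-frame command into the drone frame reduces to a planar rotation by the yaw angle. A minimal Python sketch (the function name is illustrative):<br />

```python
import math

def global_to_drone(vx_g, vy_g, yaw):
    """Rotate a planar command from the global frame into the drone frame.

    With roll and pitch assumed zero, the body-frame command is the
    transpose (inverse) of the yaw rotation applied to the global command."""
    c, s = math.cos(yaw), math.sin(yaw)
    vx_d = c * vx_g + s * vy_g
    vy_d = -s * vx_g + c * vy_g
    return vx_d, vy_d
```

For example, with the drone yawed 90° counter-clockwise, a global +x command maps to a body-frame -y command.<br />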
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover must protect the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that suited the needs of the project best was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is and do not alter it. Preferably, we would also use this software to process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transform the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is not changed. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
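As an illustration of the underlying technique, the voting step of a line Hough transform can be sketched as follows (a toy Python version for intuition only; the project uses MATLAB's built-in implementation).<br />

```python
import math
from collections import Counter

def hough_votes(points, theta_steps=180, rho_res=1.0):
    """Accumulate Hough votes: each edge point votes for every (rho, theta)
    pair of lines passing through it; collinear points pile up in one bin."""
    acc = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_res), t)] += 1
    return acc

# Points on the line y = x vote most strongly into a bin near theta = 135 deg.
points = [(i, i) for i in range(5)]
peak_bin, peak_votes = hough_votes(points).most_common(1)[0]
```

In the real skill, peaks in this accumulator correspond to candidate field lines, which are then filtered with the line estimator's predictions.<br />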
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls used can be red, orange or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each could be a ball: blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
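The confidence formula above can be written out as a small function (a Python sketch with illustrative names; the project implements this in MATLAB).<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a blob is the ball: perfectly round blobs of the
    expected radius score 1.0; elongated or mis-sized blobs score lower."""
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```

Both factors lie in (0, 1], so the confidence itself is bounded by 1 and can be thresholded directly.<br />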
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of doing the color filtering in the CbCr plane, it is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of detected blobs which could be players is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: seen from the top, a player appears different than when seen from an angle. A bigger acceptance range for blobs reduces the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-the-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused; a detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, it was extended to handle the cases where the ball position is predicted by the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false-positive and false-negative results as well; further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection, we can rely on two sources of information: the world model and the raw images. If we keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take an image of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since the collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
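The intuition behind this condition is that two touching players merge into one elongated blob, so a blob that is both large and stretched is flagged. A Python sketch of the MATLAB condition (the thresholds are taken from the condition above, the function name is illustrative):<br />

```python
def is_possible_collision(minor_axis, major_axis, min_object_radius):
    """Flag a blob as a possible collision: elongated (two merged players)
    and large enough in both axes to contain two bodies."""
    elongated = (major_axis / minor_axis) > 1.5
    wide_enough = minor_axis >= 2 * min_object_radius
    long_enough = major_axis >= 4 * min_object_radius
    return elongated and wide_enough and long_enough
```

A single isolated player fails the elongation test, while a small elongated artifact fails the size tests, so both kinds of false positives are suppressed.<br />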
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (here, only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch, yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image-processing algorithms are developed under one essential assumption: the drone attitude is stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore, these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to handle the refereeing tasks and the image processing, the drone altitude should also be known. The drone has its own altimeter and its output data is accessible. The obtained altitude data is fused with the planar position data to obtain the drone's position vector.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in MATLAB is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further under the following assumptions:<br />
* The center of the image is assumed to be focal center of the camera and this is coincident with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (which is its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of image) with respect to the origin of the drone is known; it is a drone-fixed position vector lying along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now, the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion changes with the height of the camera. To achieve it, the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
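Under the assumptions listed above, the full pixel-to-field conversion can be sketched as follows (a Python illustration; the camera offset, the mm-per-pixel value and the function name are placeholders, not the project's calibrated numbers).<br />

```python
import math

def pixel_to_field(px, py, drone_pose, cam_offset_mm=150.0,
                   mm_per_px_at_1m=1.44):
    """Map a pixel offset (px, py), measured from the image center, to
    field coordinates in mm, given the drone pose (x_mm, y_mm, yaw_rad, z_mm).

    The mm-per-pixel ratio scales linearly with altitude; 1.44 mm/px at
    1 m is an assumed value for a 60-degree-diagonal, 640-pixel-wide camera."""
    x, y, yaw, z = drone_pose
    scale = mm_per_px_at_1m * (z / 1000.0)  # ground mm covered by one pixel
    dx = cam_offset_mm + px * scale         # camera lies ahead on the drone x-axis
    dy = py * scale
    c, s = math.cos(yaw), math.sin(yaw)     # rotate drone frame -> field frame
    return x + c * dx - s * dy, y + s * dx + c * dy
```

With the drone level and at the field origin, a pixel at the image center thus maps to a point directly below the camera, offset from the drone's center of gravity along its x-axis.<br />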
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block may send ''detect ball'' as a task to agent A (the drone) and ''locate player'' to agent B. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent's controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects like the ball, whether it has been updated by an agent camera or not. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two aspects are addressed in the path-planning block: first, the case of multiple drones, where collisions between them must be avoided; second, generating an optimal path as the reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object on the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3); the target position is simply the predicted ball position at that time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not as effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should be determined only in the moving direction of the Turtle, so only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
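The search for the look-ahead time t0 described above can be sketched as follows. This is an illustrative Python sketch (the project itself uses MATLAB/Simulink): the time-to-target model `tt_constant_speed` is a hypothetical stand-in for the real drone-plus-controller model, and the ball is extrapolated with constant velocity.

```python
import math

def time_ahead_reference(drone_pos, ball_pos, ball_vel, time_to_target,
                         t_max=3.0, dt=0.05):
    """Return the predicted ball position at the smallest look-ahead time
    t0 for which the drone's time to target (TT) satisfies TT <= t0."""
    t0 = 0.0
    while t0 <= t_max:
        # Ball position t0 seconds ahead (constant-velocity extrapolation).
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(drone_pos, target) <= t0:
            return target
        t0 += dt
    # No intersection found within t_max: fall back to the current position.
    return ball_pos

def tt_constant_speed(drone_pos, target, v_max=1.0):
    """Crude stand-in for the drone motion model: straight line at top speed."""
    return math.hypot(target[0] - drone_pos[0],
                      target[1] - drone_pos[1]) / v_max
```

For a stationary ball this simply returns the current ball position; for a ball moving away more slowly than the drone, it returns the interception point.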
<br />
=== Collision avoidance ===<br />
When several drones are flying above the field, the path planner should create paths that avoid collisions between them. This is handled by the collision-avoidance block, which has a higher priority than the optimal path planning that is calculated from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision-avoidance mode to keep the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and it is stopped once the drones are back at safe positions. In this project, since only one drone is used, collision avoidance has not been implemented. However, it could be an interesting extension for anyone continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
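The perpendicular repulsion described above could look as follows in a sketch (Python for illustration; the safe distance and command magnitude are made-up values, and the real system would send these commands to each drone's LLC):

```python
import math

def repulsion_commands(p1, v1, p2, v2, safe_dist=1.0, v_repel=0.8):
    """If two drones are closer than `safe_dist`, return velocity commands
    perpendicular to each drone's own velocity, pointing away from the
    other drone. Returns None when no intervention is needed."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) >= safe_dist:
        return None

    def away(v, sep):
        # Two unit vectors are perpendicular to v; pick the one opposing
        # `sep` (sep points from this drone towards the other one).
        n = math.hypot(v[0], v[1])
        if n < 1e-9:                          # hovering: push straight away
            m = math.hypot(sep[0], sep[1]) or 1.0
            return (-v_repel * sep[0] / m, -v_repel * sep[1] / m)
        perp = (-v[1] / n, v[0] / n)
        if perp[0] * sep[0] + perp[1] * sep[1] > 0:  # points towards other
            perp = (-perp[0], -perp[1])
        return (v_repel * perp[0], v_repel * perp[1])

    return away(v1, (dx, dy)), away(v2, (-dx, -dy))
```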
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
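As an illustration of this storage pattern, a minimal sketch is given below. The actual class is written in MATLAB and its real method names are those listed in Tables 1 and 2; the names used here are hypothetical, and the assumption of two teams of n players each is the author's reading of "players per team".

```python
class Player:
    """Last known state of one player (illustrative)."""
    def __init__(self):
        self.position = (0.0, 0.0)
        self.velocity = (0.0, 0.0)

class WorldModel:
    """Storage sketch: data is only changed through explicit 'set'
    functions, so no process overwrites it by accident."""
    def __init__(self, n):
        self._ball = (0.0, 0.0)
        self._drone = (0.0, 0.0, 0.0)              # x, y, yaw
        self._turtle = (0.0, 0.0)
        self._players = [Player() for _ in range(2 * n)]  # two teams of n

    def set_ball(self, pos):
        self._ball = pos

    def set_drone(self, pose):
        self._drone = pose

    def set_player(self, i, pos):
        self._players[i].position = pos

    @property
    def ball(self):
        return self._ball

    def player(self, i):
        return self._players[i]
```

Usage then mirrors the tables: `W = WorldModel(2)` followed by `W.set_ball((1.0, 2.0))` and reads like `W.ball`.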
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. A particle filter, also known as Monte Carlo localization, is chosen. The main reason is that a particle filter can handle multiple-object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of two hypotheses, which both represent a potential ball position. The first one uses a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one uses a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track of whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are used by the same particle filter, as it does not matter which source a measurement comes from. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate from inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
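The two-hypothesis logic described above can be summarized in a sketch. The real implementation updates the strong hypothesis with the particle-filter equation given earlier; here a simple exponential blend with an illustrative constant `alpha` stands in for it, while the 0.5 m / two-consecutive-outliers reset rule follows the text:

```python
class TwoHypothesisBallFilter:
    """Sketch of the strong/weak hypothesis logic. A single outlier is
    treated as a possible false positive and ignored; two consecutive
    measurements further than `reset_dist` from the strong estimate
    re-initialize the strong filter on the latest measurement."""
    def __init__(self, x0, alpha=0.1, reset_dist=0.5):
        self.strong = x0
        self.alpha = alpha          # illustrative smoothing constant
        self.reset_dist = reset_dist
        self.outliers = 0

    def update(self, z):
        dx, dy = z[0] - self.strong[0], z[1] - self.strong[1]
        if (dx * dx + dy * dy) ** 0.5 > self.reset_dist:
            self.outliers += 1
            if self.outliers >= 2:  # change of direction, not noise
                self.strong = z
                self.outliers = 0
            return self.strong      # single outlier: ignore
        self.outliers = 0
        a = self.alpha
        self.strong = (self.strong[0] + a * dx, self.strong[1] + a * dy)
        return self.strong
```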
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, a particle filter is again used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
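A sketch of the described ‘Match’ behaviour (greedy nearest-neighbour assignment, falling back to the next-nearest player when the nearest one is already taken; Python for illustration):

```python
import math

def match_measurements(measurements, known_positions):
    """Assign each measured position to the closest still-unassigned
    player, in measurement order. Returns the matched player index per
    measurement. As noted above, this greedy scheme is not optimal when
    several measurements compete for the same player."""
    assigned = []
    taken = set()
    for m in measurements:
        order = sorted(range(len(known_positions)),
                       key=lambda i: math.hypot(m[0] - known_positions[i][0],
                                                m[1] - known_positions[i][1]))
        idx = next(i for i in order if i not in taken)
        taken.add(idx)
        assigned.append(idx)
    return assigned
```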
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw rate and vertical direction. The corresponding forward and sideways velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and minimize the measurement noise, making the closed-loop control system of the drone more robust. Since the flying height of the drone is not critical for the system, the height of the drone is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the reported drone position information is incomplete. The example (fig.2) provides a visual impression of the original data measured by the top camera. Based on fig 2, the motion data clearly indicates what the drone motion in one degree of freedom looks like. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces reasonable estimates for the empty data points. <br />
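The interpolation step could be sketched as follows (Python for illustration; the project's preprocessing is done in MATLAB). Missing camera samples are marked as `None` here:

```python
def fill_gaps(samples):
    """Linearly interpolate missing (None) camera samples. Leading and
    trailing gaps are filled by holding the nearest valid value."""
    known = [(i, v) for i, v in enumerate(samples) if v is not None]
    if not known:
        return samples[:]
    out = list(samples)
    # Hold the first/last valid value at the edges.
    for i in range(0, known[0][0]):
        out[i] = known[0][1]
    for i in range(known[-1][0] + 1, len(out)):
        out[i] = known[-1][1]
    # Linear interpolation between consecutive valid samples.
    for (i0, v0), (i1, v1) in zip(known, known[1:]):
        for i in range(i0 + 1, i1):
            t = (i - i0) / (i1 - i0)
            out[i] = v0 + t * (v1 - v0)
    return out
```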
====Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame. The positions measured by the top camera are calculated in the global coordinate system. <br><br><br />
The data can be transformed between body frame and global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter. The identified model is the response to the input commands (a, b, c and d) in the body frame; the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
====Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)'''<br><br />
The response to input b is measured by the top camera. The preprocessed data is shown below, and this processed data is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from the data shown above. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model is evaluated in MATLAB by comparing it with the real response. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there are four samples of delay due to the wireless communication. Compared with results measured several times, the estimation is reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may cause the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
To reduce false positive detections and the required processing work, estimators related to the objects to be detected are used. The image processing also requires some ''a priori'' knowledge about the objects to be detected. For ''circle detection'' the expected diameter of the circles is important: during the processing of an image, detecting all possible circles is not necessary; only the circles that are compatible with the ''expected'' size of the object need to be detected.<br />
<br />
===Ball Estimator===<br />
===Object Size Estimator===<br />
===Line Estimator===<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and it sends high-quality HD video streams to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone’s own structure, control electronics and software are used for positioning the drone. Apart from that, controlling a drone is complicated and is also out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing it should look downwards. Therefore the first idea was to disassemble it and mount the camera on a swivel to tilt it down 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images should be accessible from MATLAB. However, after some effort and trial and error, it was observed that capturing and transferring the images of the drone’s embedded camera to MATLAB is not easy or straightforward. Further effort showed that using this drone camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, the measurements showed an FOV of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
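The relation between diagonal FOV, aspect ratio and distance per pixel used for such tables can be reproduced as follows (a sketch assuming a rectilinear lens; the 2 m camera height in the example is a made-up value, and the actual measured numbers are in Table 2):

```python
import math

def horizontal_fov(diag_fov_deg, aspect_w, aspect_h):
    """Horizontal FOV derived from the diagonal FOV of a rectilinear lens."""
    d = math.hypot(aspect_w, aspect_h)
    half_diag = math.radians(diag_fov_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half_diag) * aspect_w / d))

def metres_per_pixel(diag_fov_deg, aspect_w, aspect_h, width_px, height_m):
    """Ground distance covered by one pixel at a given camera height."""
    h_fov = math.radians(horizontal_fov(diag_fov_deg, aspect_w, aspect_h))
    ground_width = 2 * height_m * math.tan(h_fov / 2)
    return ground_width / width_px
```

For the specified 92° diagonal FOV at 16:9, this gives a horizontal FOV of roughly 84°, which makes the measured value of about 70° notably below specification.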
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have simple communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera is called the Ai-Ball and is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
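A sketch of this initialization in Python (the project itself uses MATLAB UDP objects). The navdata wake-up packet shown is a commonly used trigger from the AR.Drone SDK2, but the exact start-up sequence should be checked against the SDK documentation:

```python
import socket

DRONE_IP = "192.168.1.1"
AT_PORT, NAV_PORT = 5556, 5554

def at_ftrim(seq):
    """AT commands are '\r'-terminated ASCII strings carrying a sequence
    number; FTRIM takes no further arguments."""
    return "AT*FTRIM={}\r".format(seq)

def initialize():
    """Open the two UDP sockets, wake up the navdata stream and set the
    horizontal-plane reference (sketch only, not the project code)."""
    at_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav_sock.settimeout(0.001)                 # the 1 ms timeout listed above
    nav_sock.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAV_PORT))
    at_sock.sendto(at_ftrim(1).encode(), (DRONE_IP, AT_PORT))
    return at_sock, nav_sock
```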
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
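The input side of such a wrapper can be sketched as below. The AR.Drone protocol encodes fractional arguments as the signed 32-bit integer that shares the float's IEEE-754 bit pattern; the exact PCMD argument order should be verified against the SDK, so the mapping below is an assumption:

```python
import struct

def f2i(x):
    """Reinterpret a 32-bit float's bit pattern as a signed integer,
    as required by the AT command protocol."""
    return struct.unpack('<i', struct.pack('<f', x))[0]

def pcmd(seq, tilt_x, tilt_y, v_z, v_psi):
    """Wrapper sketch: clamp the four doubles to [-1, 1] and build a
    progressive-command string (flag 1 enables the tilt arguments;
    the argument order here is an assumption)."""
    vals = [max(-1.0, min(1.0, v)) for v in (tilt_x, tilt_y, v_z, v_psi)]
    return "AT*PCMD={},1,{},{},{},{}\r".format(seq, *[f2i(v) for v in vals])
```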
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of this search, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection. To be able to connect to the camera, one needs a Wi-Fi antenna. The camera is mounted facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera were removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); its definition is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code to convert measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. The software developed at TechUnited did not need any expansion, as part of the extensive existing code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Given the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion-control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion-control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image-processing block. In this project, the planar motion of the drone in (x,y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are trajectories like straight lines, while aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x,y,θ), measured from the top-camera images, are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the equation of motion of the drone, an integral action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands to the drone in the oscillatory region.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses the Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
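Per axis, the control law described above amounts to the following sketch (gains and dead-zone width are illustrative, not the tuned project values):

```python
def deadzone_pd(error, d_error, kp, kd, dead):
    """HLC per-axis control law sketch: zero output inside the dead zone
    (the drone's 'comfort zone'); outside it a plain PD action that is
    deliberately NOT offset back to the dead-zone edge, so that small
    commands in the LLC's oscillatory region are avoided."""
    if abs(error) <= dead:
        return 0.0
    return kp * error + kd * d_error
```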
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of the rotations around the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles: the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
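With roll and pitch assumed zero, the transformation reduces to a planar rotation over the yaw angle ψ. A minimal sketch (Python illustration; the actual transformation runs in the Simulink controller):

```python
import math

def global_to_body(ex, ey, psi):
    """Rotate a planar error vector from the global (field) frame into
    the drone body frame, using only the yaw angle psi. Roll and pitch
    are assumed to stay near zero, so the full Euler rotation matrix
    reduces to this 2-D rotation."""
    ex_b = math.cos(psi) * ex + math.sin(psi) * ey
    ey_b = -math.sin(psi) * ex + math.cos(psi) * ey
    return ex_b, ey_b
```

For example, with ψ = 90°, a global error pointing along +x appears along -y in the body frame.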
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
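As an illustration, sending such a command string over UDP could look like this (Python sketch; the "vx;vy;omega" message format, IP address and port are assumptions for illustration only, the real protocol is defined by the scripts in the GitHub repository):

```python
import socket

def send_robot_command(vx, vy, omega, ip="192.168.1.10", port=5005):
    """Send a velocity command string to the Python script running on
    the Raspberry Pi over UDP. The semicolon-separated message format
    and the address are illustrative, not the project's actual protocol.
    """
    msg = "{:.3f};{:.3f};{:.3f}".format(vx, vy, omega)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(msg.encode("ascii"), (ip, port))
    finally:
        sock.close()
    return msg
```

UDP is connectionless, so the sender does not know whether the robot received the packet; the project relies on the high command rate to tolerate occasional loss.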
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image-processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the turtle) and maintained (updated regularly) in a real-time database (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle that is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only a part of it was handpicked as it suited the needs of the project the best. This data, as stated earlier, is information on the location of the turtle, the ball and the players.<br> <br />
A small piece of code from the code base of TechUnited was taken out. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment; the data is sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45701Implementation MSD162017-10-22T22:11:24Z<p>Tolcer: /* Estimator */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we do not alter it and use it as is. Preferably we would use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
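The RGB-to-YCbCr conversion can be sketched as follows (a minimal numpy implementation of the standard ITU-R BT.601 conversion; the project itself uses Matlab's image processing toolbox for this step):

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (uint8, HxWx3) to the YCbCr colour space
    using the ITU-R BT.601 coefficients. Y (luma) separates dark
    players from the green field; the CbCr plane separates the
    yellow/orange ball from the rest."""
    r, g, b = (img[..., i].astype(np.float64) / 255.0 for i in range(3))
    y  =  16.0 +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128.0 -  37.797 * r -  74.203 * g + 112.0   * b
    cr = 128.0 + 112.0   * r -  93.786 * g -  18.214 * b
    return np.stack([y, cb, cr], axis=-1)
```

Black pixels map to (16, 128, 128) and white pixels to (235, 128, 128), i.e. the colour information lives entirely in the Cb/Cr channels.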
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is not changed. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used can be red, orange or yellow; colors that are in the upper-left corner of the CbCr-plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image a blob recognition algorithm returns blobs with their properties, such as the blob center and major- and minor axis length. From this list with blobs and their properties, it is determined if it could be a ball. Blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
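The confidence formula above can be written as a small helper function (Python sketch; the project itself uses Matlab, and the variable names are illustrative):

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a blob is the ball: the roundness factor
    (minor/major axis ratio) and the size factor (ratio of blob radius
    to expected ball radius) both lie in (0, 1], so a perfectly round
    blob of exactly the expected size scores 1.0."""
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size
```

A strongly elongated blob or a blob much larger or smaller than the ball is penalised multiplicatively by both factors.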
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blobs which could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball is: if a player is seen from the top, it will appear different than when it is seen from an angle. A bigger range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. Similar to the line detection case, the algorithm developed by the previous generation is used essentially. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update and improvement to this algorithm was added to handle the cases where the ball position is predicted via the particle filter. Although the ball is then not detected by the camera, the position of the ball with respect to the field coordinate system can still be known (at least predicted), and based on this ball coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results as well. A further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision-detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take the images of the playing field and we see in the image that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both these methods would be ideal. However, since the collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs that is generated in the object detection algorithm. For each blob in this list, the length of the minor- and major axes are checked. The axes are compared to each other to determine the roundness of the object. Moreover, the axes are compared with the minimal expected radius of the player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
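The condition above translates directly into a small predicate (Python sketch; the thresholds are those stated in the condition, the function name is illustrative):

```python
def possible_collision(minor_axis, major_axis, min_radius):
    """A blob is flagged as a possible collision when it is clearly
    elongated (two players merged into one blob) and large enough in
    both directions to contain two touching players."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)
```

A single round player fails the elongation test, while two touching players form one stretched blob that passes all three checks.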
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in diverse ways. In this project, the planar position (x-y-ψ) of the refereeing agent (here only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image-processing algorithms are developed based on one essential assumption: the drone's angular positions are stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore these two angles have not been taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, and the output data of the altimeter is accessible. The obtained drone altitude is fused with the planar position data, and the following position vector is obtained for the drone.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of each detected object or ball are calculated with respect to the center of the image. This data is processed further, based on the following principles:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting tilting of the drone around its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (which is its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector that lies along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now, the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). However, this conversion ratio changes with the height of the camera. To achieve the conversion, the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
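Under the pinhole-camera assumption (camera parallel to the ground, image center at the focal center), the pixel-to-millimeter conversion can be sketched as follows (Python; the FOV value and image width are properties of the actual camera and are illustrative here):

```python
import math

def pixels_to_mm(px, height_mm, fov_rad, image_width_px):
    """Convert a pixel offset from the image centre to millimetres on
    the ground plane. The ground strip visible along the image width is
    2*h*tan(FOV/2), so the mm-per-pixel ratio grows linearly with the
    drone altitude h."""
    ground_width_mm = 2.0 * height_mm * math.tan(fov_rad / 2.0)
    mm_per_pixel = ground_width_mm / image_width_px
    return px * mm_per_pixel
```

For example, at 1 m altitude with a 90° horizontal FOV and a 640-pixel-wide image, an offset of 64 pixels corresponds to 200 mm on the field.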
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents, sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for an agent's controller. As shown in Fig.1, it is assumed that the World Model is able to provide the position and velocity of objects such as the ball, whether it has been updated by an agent camera or not. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two aspects have been addressed in the path-planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve it, we require a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the initial conditions of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) of the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for ground agents that move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the Turtle. Hence, only the x-component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
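A minimal version of this search could look as follows (Python sketch; the time-to-target here uses a crude constant-speed drone model as a placeholder for the identified drone model with controller, and the ball is assumed to move with constant velocity):

```python
def find_time_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    dt=0.1, t_max=10.0):
    """Search over lookahead times t0: predict the ball position at
    t + t0, estimate the drone's time-to-target TT for that point,
    and return the first t0 with TT <= t0 (i.e. t0 = TT within the
    step resolution), together with the target position."""
    t0 = 0.0
    target = ball_pos
    while t0 <= t_max:
        # Constant-velocity prediction of the ball at time t + t0.
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = ((target[0] - drone_pos[0]) ** 2
                + (target[1] - drone_pos[1]) ** 2) ** 0.5
        tt = dist / drone_speed  # placeholder drone motion model
        if tt <= t0:
            return t0, target
        t0 += dt
    return t_max, target
```

With a stationary ball 2 m away and a drone speed of 1 m/s, the search settles on t0 ≈ 2 s, as expected.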
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning calculated from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision-avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance; the commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that results in collision avoidance, and it is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be a possible area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
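A simplified sketch of this storage pattern (Python; the actual class is written in Matlab, and the exact function names from Tables 1 and 2 may differ from this illustration):

```python
class WorldModel:
    """Sketch of the World Model storage unit: positions can be read
    freely, but are changed only through explicit 'set' functions, so
    no process overwrites World Model data by accident."""

    def __init__(self, n_players):
        # One ball is hardcoded; n_players is the number per team,
        # so the player list holds 2 * n_players entries.
        self.ball = None
        self.drone = None
        self.turtle = None
        self.players = [None] * (2 * n_players)

    def set_ball(self, pos):
        self.ball = pos

    def set_drone(self, pos):
        self.drone = pos

    def set_turtle(self, pos):
        self.turtle = pos

    def set_player(self, idx, pos):
        self.players[idx] = pos
```

Usage mirrors the tables: `W = WorldModel(n)` initialises the model, after which skills call `W.set_ball(...)` etc., while any block may read `W.ball` directly.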
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
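The re-initialisation rule described above can be sketched as follows (Python; the 0.5 m threshold is taken from the text, while the function and variable names are illustrative):

```python
def strong_filter_update(estimate, measurements, threshold=0.5):
    """Decide whether the strong filter should be re-initialised: if
    the last two consecutive measurements are both further than
    `threshold` metres from the current estimate, this indicates a
    change of direction rather than a single outlier, and the newest
    measurement becomes the new initial position."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    z_prev, z_new = measurements[-2], measurements[-1]
    if dist(z_prev, estimate) > threshold and dist(z_new, estimate) > threshold:
        return z_new, True   # re-initialise on the newest measurement
    return estimate, False   # keep the strong estimate
```

A single far-off measurement leaves the strong estimate untouched; only a second consistent outlier triggers the reset.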
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
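The greedy matching described above could be sketched as follows (Python; function and variable names are illustrative, and this reproduces the suboptimal next-nearest-free-player behaviour of the 'Match' function rather than an optimal assignment):

```python
def match_measurements(known_positions, measurements):
    """Greedy nearest-neighbour matching of incoming position
    measurements to the last known player positions. When the nearest
    player is already taken, the next-nearest free player is used;
    adequate for few players at a high update rate."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    assignment = {}   # measurement index -> player index
    taken = set()
    for m_idx, z in enumerate(measurements):
        order = sorted(range(len(known_positions)),
                       key=lambda p: dist2(z, known_positions[p]))
        for p in order:
            if p not in taken:
                assignment[m_idx] = p
                taken.add(p)
                break
    return assignment
```

An optimal alternative would solve the full assignment problem (e.g. the Hungarian algorithm), which becomes relevant with more players entering and leaving the field of view.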
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and the vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera on top of the field. From the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera on top of the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the subsequent closed-loop control system for the drone can be robust. As the flying height of the drone is not critical for the system, the height of the drone is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt, a floating-point value in range [-1, 1]. Command (b) is left-right tilt, a floating-point value in range [-1, 1]. Command (d) is drone angular speed in range [-1, 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, ψ) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following chapters. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig. 2) provides a visualization of the original data measured by the top camera. Based on fig. 2, the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make it continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces a reasonable estimate for the empty data points. <br />
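The gap-filling step can be sketched as follows (a Python illustration of the preprocessing; the project performs this in MATLAB). Missing top-camera samples are marked as NaN and filled by linear interpolation between the surrounding valid samples; the sample values below are made up for illustration.

```python
import numpy as np

# Frames where the top camera missed the drone are NaN; fill them by linear
# interpolation between the valid neighbours (endpoints are held constant).
t = np.arange(8)                       # frame indices
x = np.array([0.0, np.nan, 0.4, np.nan, np.nan, 1.0, 1.1, np.nan])
valid = ~np.isnan(x)
x_filled = np.interp(t, t[valid], x[valid])
```

For the gap between frames 2 (0.4) and 5 (1.0) this yields 0.6 and 0.8, i.e. the straight-line guess the figure above suggests.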
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one is the body frame, the other is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global frame. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame so as to avoid a parameter-varying Kalman filter. Figure 5 describes this concept as a block diagram. <br />
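The planar rotation used here can be sketched as follows (Python illustration; the sign convention of the yaw angle ψ is an assumption and should be checked against the actual frame definitions in figure 4).

```python
import numpy as np

# Planar rotation between global and body frame; psi is the yaw angle
# measured by the top camera. Filtering happens in the body frame, after
# which results are rotated back to the global frame.
def global_to_body(v_global, psi):
    R = np.array([[ np.cos(psi), np.sin(psi)],
                  [-np.sin(psi), np.cos(psi)]])
    return R @ v_global

def body_to_global(v_body, psi):
    return global_to_body(v_body, -psi)   # inverse rotation

v = np.array([1.0, 0.0])
psi = np.pi / 2
v_body = global_to_body(v, psi)           # ≈ [0, -1] under this convention
v_back = body_to_global(v_body, psi)      # recovers [1, 0]
```

Because ψ changes over time, applying this rotation outside the filter is what keeps the filter matrices constant, as argued above.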
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates the extent to which the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR.Drone was measured and modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimation is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with the state vector X = [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
This section is organized into three parts:<br />
* Ball Estimator<br />
* Object Size Estimator<br />
* Line Estimator<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant properties of the drone, as given on the manufacturer's website, are listed below in Table 1; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project it was decided to use the drone's own structure, control electronics and software for positioning the drone. Apart from that, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; for refereeing, however, it should look downwards. Therefore the first idea was to disassemble it and mount the camera on a swivel so it could tilt down 90 degrees, which would require some structural changes. Since the entire implementation is done in the MATLAB/Simulink environment, the camera images must be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera into MATLAB is not straightforward: the drone camera is either not compatible with MATLAB or causes a lot of delay. The idea of a swiveled drone camera was therefore abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a view close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2; the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
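The flat-trim step can be sketched as follows, using the host and port values listed above. This is a hedged Python illustration (the project drives the drone from MATLAB): the AT*FTRIM command takes only a sequence number per the AR.Drone SDK, but the exact framing should be verified against the SDK documentation referenced above.

```python
import socket

DRONE_IP, AT_PORT = "192.168.1.1", 5556   # remote host and control port from the list above

def make_ftrim(seq):
    """Build the flat-trim AT command string (sequence number only)."""
    return "AT*FTRIM={:d}\r".format(seq).encode("ascii")

def send_ftrim(seq=1):
    # AT commands are plain ASCII strings sent as UDP datagrams.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(make_ftrim(seq), (DRONE_IP, AT_PORT))
    sock.close()
```

The sequence number must increase monotonically across all AT commands in a session, which is why it is passed in rather than hard-coded.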
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
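The input side of the wrapper can be sketched as follows. This is a hypothetical Python illustration of the interface contract only (the actual wrapper is a MATLAB function, and the AT-command encoding and navdata byte parsing are omitted here).

```python
# Hypothetical sketch of the wrapper's input handling: the command vector
# [x_tilt, y_tilt, v_z, v_psi] is validated and clamped to [-1, 1] before
# being encoded into the UDP string the drone expects.
def clamp_command(cmd):
    if len(cmd) != 4:
        raise ValueError("expected [x_tilt, y_tilt, v_z, v_psi]")
    return [max(-1.0, min(1.0, float(c))) for c in cmd]

safe = clamp_command([0.5, -2.0, 1.5, 0.0])   # out-of-range values are saturated
```

Clamping at the wrapper boundary guarantees that whatever the higher-level controller computes, the drone only ever receives commands in its valid range.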
<br />
== Top-Camera ==<br />
The topcam is a camera fixed above the playing field. It is used to estimate the location and orientation of the drone, and this estimate serves as feedback for positioning the drone at a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, one needs a Wi-Fi antenna. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV), whose definition is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel, and it is embedded in the Simulink code when converting measured positions to world coordinates.<br />
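The per-pixel ground resolution implied by these numbers can be worked out as follows (Python sketch; the flying height of 2 m is an assumed example value, and the pinhole model with no lens distortion is an assumption).

```python
import math

# Ground resolution from a 60° diagonal FOV at 640x480 (4:3), assuming a
# simple pinhole model looking straight down from height h.
def metres_per_pixel(h, fov_diag_deg=60.0, w_px=640, h_px=480):
    d_px = math.hypot(w_px, h_px)                        # 800 px image diagonal
    d_m = 2.0 * h * math.tan(math.radians(fov_diag_deg) / 2.0)  # ground diagonal [m]
    return d_m / d_px

mpp = metres_per_pixel(h=2.0)   # ~2.9 mm/px at 2 m altitude
```

Because the ratio of ground size to pixel count is the same along the diagonal, width and height, a single metres-per-pixel factor suffices for the conversion to world coordinates.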
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Given the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, output by the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple, such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor is subject only to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ), measured from the top-camera images, are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent to the drone as fly commands.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the errors in the states that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero; this results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
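The dead-zone PD law described above can be sketched as follows. This is a Python illustration with made-up dead-zone width and gains, not the tuned project parameters; the actual controller runs in Simulink.

```python
# Sketch of the HLC per-axis law: zero output inside the dead zone (comfort
# zone), PD action outside it, saturated to the drone's command range.
def deadzone_pd(error, d_error, dz=0.05, kp=0.8, kd=0.2):
    if abs(error) < dz:
        return 0.0                       # inside the comfort zone: no command
    u = kp * error + kd * d_error        # PD action; no I-term needed here
    return max(-1.0, min(1.0, u))        # saturate to [-1, 1]

u = deadzone_pd(0.5, 0.0)   # outside the dead zone: proportional command
```

Note that, as stated above, the error is not offset by the dead-zone width, so the command jumps directly to the full PD value at the zone boundary rather than ramping up from zero.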
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses the Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles; within this method, the order of the rotations around the specific axes matters. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To its left, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state can be computed, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the turtle) and maintained (updated regularly) in a real-time data-base (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a Robocup match, the participating robots, maintain this data-base locally. Therefore, the Turtle which is used for the referee system, has a locally stored global map of the environment. This information was needed to be extracted from the Turtle and fused with the other algorithms and software that was developed for the drone. These algorithms and software were created on MATLAB and Simulink while the TechUnited software is written in C and uses Ubuntu as the operating system. The player-robots from TechUnited, communicate with each other via the UDP communication protocol and this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location information of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in the MATLAB environment; the information is then sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div><br />
Implementation MSD16 — 2017-10-22T22:10:24Z<p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as-is without alteration. Ideally we could also use this software to process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transform the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generations [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project] ; the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill function.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; these colors lie in the upper-left corner of the CbCr plane. A binary image is created where the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, as noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties it is determined whether each blob could be a ball: blobs that are too big or too small are removed from the list. For the remaining candidate balls, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
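The confidence score can be sketched as follows (Python illustration; the project computes it in MATLAB). The blob radius `r_blob` is estimated here as the mean of the semi-axes, which is an assumption — the original code may define Rblob differently — and `r_ball_px` is the expected ball radius in pixels, a parameter assumed for illustration.

```python
# Sketch of the ball confidence: roundness (minor/major axis ratio) times a
# size-match term comparing the blob radius to the expected ball radius.
def ball_confidence(minor_axis, major_axis, r_ball_px):
    r_blob = (minor_axis + major_axis) / 4.0          # mean radius from the two axes (assumed)
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball_px) / max(r_blob, r_ball_px)
    return roundness * size_match

c = ball_confidence(20.0, 20.0, 10.0)   # a round blob of the expected size scores 1.0
```

Both factors lie in (0, 1], so the confidence is 1 only for a perfectly round blob of exactly the expected size and decays as either property degrades.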
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering on color in the CbCr plane, the filtering is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of blob sizes accepted as possible players is larger than for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different from one seen at an angle. A bigger acceptance range ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. Similar to the line detection case, the algorithm developed by the previous generation is used essentially. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter: although the ball is not detected by the camera, the position of the ball with respect to the field coordinate system is still known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well; further improvement of the refereeing is still necessary.<br />
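Once the (possibly predicted) ball position is available in field coordinates, the fallback check reduces to a bounds test, as sketched below. The field half-sizes and ball radius are assumed example values, not the actual pitch dimensions used in the project.

```python
# Minimal sketch of the coordinate-based out-of-pitch check. The field is
# centered at the origin; half_length/half_width/r_ball are assumed values.
def ball_out_of_pitch(x, y, half_length=4.5, half_width=3.0, r_ball=0.11):
    # The ball counts as out only once it has fully crossed the outer line.
    return abs(x) > half_length + r_ball or abs(y) > half_width + r_ball

out = ball_out_of_pitch(4.7, 0.0)   # beyond the line plus ball radius
```

Applying the same test to the particle-filter prediction is what allows a decision even in frames where the camera does not see the ball, at the cost of the occasional false positive/negative noted above.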
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take the images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
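The condition above can be transcribed directly, as in this Python sketch: an elongated blob whose axes are roughly the size of two touching players is flagged as a possible collision.

```python
# Direct transcription of the collision condition: elongated (axis ratio
# > 1.5) and large enough to contain two players of minimal radius r_min.
def possible_collision(minor_axis, major_axis, r_min):
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * r_min
            and major_axis >= 4 * r_min)

hit = possible_collision(minor_axis=22, major_axis=44, r_min=10)   # elongated double-size blob
```

A single isolated player forms a roughly round blob and fails the axis-ratio test, so it is not flagged.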
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to handle the refereeing tasks and image processing, the drone altitude must also be known. The drone has its own altimeter, whose output data is accessible. The obtained altitude data is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating the Objects: Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further under the following assumptions:<br />
* The focal center of the camera is assumed to coincide with the center of the image.<br />
* The camera is always parallel to the ground plane, i.e. tilting of the drone about its roll (φ) and pitch (θ) axes is neglected.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (the center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position, including its yaw (ψ) orientation, to the position vector of the camera, the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object must be converted into real-world units (from pixels to millimeters). The conversion ratio changes with the height of the camera, so the height information of the drone is used: from the drone height and the FOV of the camera, the ratio of pixels to millimeters is calculated. More detailed information about the FOV is given in the next sections.<br />
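The pixel-to-field conversion described above can be sketched as follows. This is an illustrative Python version (the project code is MATLAB/Simulink); the function names and the 2 m flying height in the example are assumptions, while the 60° diagonal FOV and 640x480 resolution are the Ai-Ball values given later in this page.<br />

```python
import math

def mm_per_pixel(height_mm, diag_fov_deg, width_px, height_px):
    """Scale factor from pixels to millimetres for a downward-facing camera.

    The camera footprint diagonal on the ground is 2*h*tan(FOV/2); dividing
    by the image diagonal in pixels gives the ground distance per pixel.
    Assumes the camera looks straight down (zero roll/pitch, as above).
    """
    diag_px = math.hypot(width_px, height_px)
    diag_mm = 2.0 * height_mm * math.tan(math.radians(diag_fov_deg) / 2.0)
    return diag_mm / diag_px

def pixel_to_field(px, py, width_px, height_px, scale, cam_x_mm, cam_y_mm, yaw_rad):
    """Map a pixel coordinate to field coordinates.

    Steps: shift the origin to the image centre, scale to mm, rotate by the
    drone yaw, then translate by the (known) image-centre position on the field.
    """
    u = (px - width_px / 2.0) * scale
    v = (py - height_px / 2.0) * scale
    x = cam_x_mm + u * math.cos(yaw_rad) - v * math.sin(yaw_rad)
    y = cam_y_mm + u * math.sin(yaw_rad) + v * math.cos(yaw_rad)
    return x, y

# Ai-Ball parameters from this page: 60° diagonal FOV, 640x480 image.
scale = mm_per_pixel(height_mm=2000.0, diag_fov_deg=60.0, width_px=640, height_px=480)
# The image centre maps exactly onto the camera position on the field:
print(pixel_to_field(320, 240, 640, 480, scale, cam_x_mm=1000.0, cam_y_mm=500.0, yaw_rad=0.0))
```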
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block may send 'detect ball' as a task to agent A (the drone) and 'locate player' to agent B. The path planning block then requests from the World Model the latest position and velocity of the target object, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig. 1, it is assumed that the World Model can provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path planning block: first, avoiding collisions between drones in the case of multiple drones; second, generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can also be taken into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). If instead the estimated ball position some time ahead is sent as reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to use for the reference. To solve it, we require a model of the drone motion with its controller, so that we can calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, for each candidate time ahead of the ball, the time to target (TT) for the drone is calculated (see Fig. 3); the target position is simply the predicted ball position at that time. The reference position is the position that satisfies t0 = TT, so the reference becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should be determined only in the moving direction of the turtle, so only the x-component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
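The search for the time ahead t0 satisfying t0 = TT can be sketched as below. This is an illustrative Python version (the project code is MATLAB/Simulink); the straight-line, constant-speed time-to-target is a crude stand-in for the drone motion model, and all names and the step sizes are assumptions.<br />

```python
def time_to_target(drone_pos, target_pos, drone_speed):
    """Crude time-to-target: straight-line distance over an assumed
    average drone speed (a stand-in for the drone motion model)."""
    dx = target_pos[0] - drone_pos[0]
    dy = target_pos[1] - drone_pos[1]
    return (dx * dx + dy * dy) ** 0.5 / drone_speed

def reference_point(drone_pos, ball_pos, ball_vel, drone_speed, dt=0.05, t_max=5.0):
    """Search for the time ahead t0 with t0 ≈ TT(ball position at t0).

    Steps through candidate times; the first time at which the drone can
    reach the predicted ball position is returned as the reference.
    """
    for k in range(int(t_max / dt) + 1):
        t0 = k * dt
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(drone_pos, target, drone_speed) <= t0:
            return target
    # Fall back to the current ball position if no intercept is found.
    return ball_pos

# Drone at the origin, ball 4 m away rolling towards it at 1 m/s:
ref = reference_point((0.0, 0.0), (4.0, 0.0), (-1.0, 0.0), drone_speed=2.0)
print(ref)  # ≈ (2.65, 0.0): the intercept point, not the current ball position
```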
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has higher priority than the optimal path planning computed from the drones' objectives (see Fig. 4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is achieved by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are at safe positions. Since this project deals with only one drone, collision avoidance was not implemented; it is, however, a possible area of interest for whoever continues this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
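Since the block was not implemented, the following is only a minimal Python sketch of the repulsion idea described above: each drone gets a velocity command perpendicular to its own velocity, on the side pointing away from the other drone. All names, the trigger criterion (plain distance) and the gain are assumptions.<br />

```python
import math

def avoidance_commands(pos_a, vel_a, pos_b, vel_b, safe_distance, gain=1.0):
    """Repulsion velocity commands for two drones closing on each other.

    When the drones are within the safe distance, each receives a velocity
    command perpendicular to its own velocity vector, directed away from
    the other drone, so both veer apart.
    """
    dx = pos_b[0] - pos_a[0]
    dy = pos_b[1] - pos_a[1]
    if math.hypot(dx, dy) >= safe_distance:
        return None  # no imminent collision: normal path planning applies

    def perpendicular_away(vel, away):
        # Two unit vectors are perpendicular to vel; pick the one pointing away.
        n = math.hypot(vel[0], vel[1]) or 1.0
        p = (-vel[1] / n, vel[0] / n)
        if p[0] * away[0] + p[1] * away[1] < 0:
            p = (-p[0], -p[1])
        return (gain * p[0], gain * p[1])

    cmd_a = perpendicular_away(vel_a, (-dx, -dy))  # push A away from B
    cmd_b = perpendicular_away(vel_b, (dx, dy))    # push B away from A
    return cmd_a, cmd_b

# Two drones flying head-on 1 m apart get opposite sideways commands:
print(avoidance_commands((0, 0), (1, 0), (1, 0), (-1, 0), safe_distance=2.0))
```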
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
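The storage interface of the tables above can be sketched as follows. This is a simplified Python analogue of the (MATLAB) World Model class; only a few representative set-functions are shown, and the property names follow the tables.<br />

```python
class Player:
    def __init__(self):
        self.position = None  # (x, y), unknown until the first measurement

class WorldModel:
    """Central storage: last known positions, changed only via set-functions.

    Ball, drone and turtle are plain properties (their number is fixed),
    while the players are objects of their own class, since their number
    n per team can vary.
    """
    def __init__(self, n):
        self.ball = None
        self.drone = None
        self.turtle = None
        self.players = [Player() for _ in range(2 * n)]  # n players per team

    def set_ball(self, pos):
        self.ball = pos

    def set_drone(self, pos):
        self.drone = pos

    def set_player(self, i, pos):
        self.players[i].position = pos

W = WorldModel(2)          # initialised with two players per team
W.set_ball((1.0, 0.5))
W.set_player(0, (2.0, 2.0))
print(W.ball)              # accessing player data goes via the player object:
print(W.players[0].position)
```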
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
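The current two-hypothesis behaviour (weak hypothesis = latest measurement; strong filter re-initialised after two consecutive far-off measurements) can be sketched as below. This is an illustrative Python version; the smoothing step is a placeholder for the actual particle filter update, and the class and parameter names are assumptions.<br />

```python
class DualHypothesisFilter:
    """Strong/weak hypothesis switch for ball tracking.

    The 'weak' hypothesis is simply the latest measurement. When two
    consecutive measurements are further than `threshold` metres from the
    strong estimate, the strong filter is re-initialised at the last
    measurement (interpreted as a real change of direction, not noise).
    """
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.strong = None      # current strong estimate (x, y)
        self.outliers = 0       # count of consecutive far-off measurements

    def update(self, z):
        if self.strong is None:
            self.strong = z
            return self.strong
        d = ((z[0] - self.strong[0]) ** 2 + (z[1] - self.strong[1]) ** 2) ** 0.5
        if d > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:          # two consecutive outliers:
                self.strong = z             # re-initialise the strong filter
                self.outliers = 0
        else:
            self.outliers = 0
            # In the real filter the strong estimate is the particle filter
            # output; for this sketch we just nudge it towards the measurement.
            self.strong = (0.9 * self.strong[0] + 0.1 * z[0],
                           0.9 * self.strong[1] + 0.1 * z[1])
        return self.strong

f = DualHypothesisFilter()
f.update((0.0, 0.0))
f.update((2.0, 0.0))    # first outlier: possibly a false positive, ignored
print(f.update((2.1, 0.0)))  # second outlier nearby: direction change, jump
```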
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
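The 'Match' step described above can be sketched as a greedy nearest-neighbour assignment. This Python sketch reproduces the described behaviour, including the sub-optimal fallback to the next-nearest free player; the function name and signature are assumptions.<br />

```python
def match(measurements, last_positions):
    """Greedy nearest-neighbour matching of measurements to known players.

    Returns, per measurement, the index of the matched player. If the
    nearest player is already taken, the next-nearest free player is used
    (the same sub-optimal fallback discussed above).
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    assigned = []
    taken = set()
    for z in measurements:
        # Player indices sorted by distance to this measurement.
        order = sorted(range(len(last_positions)),
                       key=lambda i: dist2(z, last_positions[i]))
        for i in order:
            if i not in taken:
                taken.add(i)
                assigned.append(i)
                break
    return assigned

players = [(0.0, 0.0), (3.0, 0.0)]       # last known player positions
print(match([(2.9, 0.1), (0.2, 0.0)], players))  # [1, 0]
```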
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. In addition, three LEDs on the drone can be detected by the camera above the field; based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control system of the drone can be robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-back tilt, a floating-point value in range [-1 1]. Command (b) is the left-right tilt, a floating-point value in range [-1 1]. Command (d) is the drone angular speed, in range [-1 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, ψ) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig. 2) gives a visual impression of the original data measured by the top camera; it clearly shows the drone motion in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation yields reasonable estimates for the empty data points. <br />
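The gap-filling step can be sketched as simple linear interpolation over the valid samples. This is an illustrative Python/NumPy version of the preprocessing (the project used MATLAB); the function name and the toy data are assumptions.<br />

```python
import numpy as np

def fill_missing(t, x):
    """Linearly interpolate missing (NaN) top-camera samples.

    t: sample times; x: measured positions with NaN where the camera did
    not detect the drone (about 25% of the samples, per the text above).
    """
    x = np.asarray(x, dtype=float)
    valid = ~np.isnan(x)
    return np.interp(t, np.asarray(t)[valid], x[valid])

t = [0.0, 0.1, 0.2, 0.3, 0.4]
x = [0.0, np.nan, 2.0, np.nan, 4.0]
print(fill_missing(t, x))  # [0. 1. 2. 3. 4.]
```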
==== Coordinate system introduction ====<br />
As the drone is flying object with four degree of freedom in the field, there exist two coordinate systems. One is the coordinate system in body frame, the other one is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
Data can be transformed between body frame and global frame via the rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model identified is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
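The body/global frame transformation used around the Kalman filter is a planar rotation by the yaw angle ψ. A minimal Python sketch (the project used MATLAB; function names are assumptions):<br />

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame by the yaw psi."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)

def global_to_body(vx, vy, psi):
    """Inverse rotation: global frame back to the body frame."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx + s * vy,
            -s * vx + c * vy)

# With psi = 90°, forward motion in the body frame is +y in the global frame.
vx, vy = body_to_global(1.0, 0.0, math.pi / 2)
print(round(vx, 6), round(vy, 6))  # 0.0 1.0
```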
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, a dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below; this processed data is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to help Matlab make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in Matlab. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there are four samples of delay due to the wireless communication. Compared with results measured several times, the estimate is reasonable. <br><br><br />
<br />
In practice no system is perfectly linear; nonlinear system behavior may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br> <br />
The model is then:<br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
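With an identified model per axis, the Kalman filter can predict through the frames in which the top camera misses the drone LEDs and correct whenever a measurement arrives. The following is an illustrative Python sketch (the project used MATLAB/Simulink): a generic constant-velocity model with the state vector [velocity, position] used above stands in for the identified state-space model, and all parameter values are assumptions.<br />

```python
import numpy as np

def kalman_track(zs, dt, q=0.1, r=0.05):
    """1-DOF Kalman filter with state [velocity, position], applied per axis.

    Predicts through samples where the top camera missed the drone
    (z is None) and corrects whenever a position measurement arrives.
    """
    A = np.array([[1.0, 0.0],     # velocity carried over (constant-velocity model)
                  [dt,  1.0]])    # position integrates the velocity
    H = np.array([[0.0, 1.0]])    # the camera measures position only
    Q = q * np.eye(2)             # process noise (model uncertainty)
    R = np.array([[r]])           # measurement noise of the top camera
    x = np.zeros((2, 1))
    P = np.eye(2)
    out = []
    for z in zs:
        x = A @ x                     # predict
        P = A @ P @ A.T + Q
        if z is not None:             # correct only when a measurement exists
            y = np.array([[z]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        out.append(float(x[1, 0]))
    return out

# Two camera frames missed in the middle; the filter coasts through them:
print(kalman_track([0.1, 0.2, None, None, 0.5], dt=0.1))
```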
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, it was decided to use the drone's own structure, control electronics and software for positioning the drone. Besides, controlling a drone from scratch is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, placed at the front, which is used to capture images. For refereeing, however, the camera should look downward. Therefore the first idea was to disassemble it and mount it on a swivel to tilt it down 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. After considerable trial and error it was observed that capturing and transferring the images of the embedded drone camera to MATLAB is not straightforward: using this camera is either incompatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, our measurements showed a diagonal FOV close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2, where the corresponding distance per pixel is calculated at standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained using the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera is called the Ai-Ball and is described in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
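The initialization above can be sketched as follows. This is an illustrative Python version using the IP address and ports listed above (the project used MATLAB UDP objects); the AT-command string format follows the AR.Drone SDK, but the helper names and the wake-up byte detail should be checked against the SDK before use.<br />

```python
import socket

DRONE_IP = "192.168.1.1"   # remote host from the initialization list above
CONTROL_PORT = 5556        # control local port
NAVDATA_PORT = 5554        # navdata local port

def at_command(name, seq, *args):
    """Format an AT command string as expected by the drone firmware.

    Every command carries an increasing sequence number, comma-separated
    arguments, and a trailing carriage return.
    """
    payload = ",".join(str(a) for a in (seq,) + args)
    return "AT*{}={}\r".format(name, payload)

def init_drone():
    """Open the two UDP sockets and start the navdata stream."""
    control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata.settimeout(0.001)                      # 1 ms timeout, as above
    navdata.bind(("", NAVDATA_PORT))
    # A short packet to the navdata port wakes up the navdata stream.
    navdata.sendto(b"\x01", (DRONE_IP, NAVDATA_PORT))
    # Set the horizontal-plane reference before flying (FTRIM, as above).
    control.sendto(at_command("FTRIM", 1).encode(), (DRONE_IP, CONTROL_PORT))
    return control, navdata

print(at_command("FTRIM", 1))  # AT*FTRIM=1\r
```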
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively, the third is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, the result is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of the searches, we finally decided to use a Wi-fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a WiFi connection. To connect to the camera, one needs a WiFi antenna. The camera is mounted facing down at the front of the drone. To reduce the added weight, the batteries of the camera were removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV). The FOV data is given in the table below. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to compute the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded into the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
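From the diagonal FOV and resolution, the metres-per-pixel ratio at a given flying height can be computed; the sketch below assumes an ideal pinhole camera pointing straight down.<br />

```python
import math

def meters_per_pixel(height_m, fov_diag_deg=60.0, res=(640, 480)):
    """Ratio between real-world distance and image pixels for a
    downward-facing camera at the given height."""
    diag_px = math.hypot(*res)  # 800 px for a 640x480 image
    # real-world length of the image diagonal on the ground plane
    diag_m = 2.0 * height_m * math.tan(math.radians(fov_diag_deg / 2.0))
    return diag_m / diag_px
```

At a height of 1 m this gives roughly 1.4 mm per pixel, so a 0.2 m ball spans on the order of 140 pixels.<br />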
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any extension, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and take images. Based on the game situation and the agent positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x,y) is of interest, since the ball and the objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are simple paths such as straight lines, while aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig.2, the drone states (x,y,θ), measured from the images of the top camera, are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented on the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative with PD coefficients (Fig.4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
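A one-axis version of this dead-zone PD law can be sketched as follows; the gains and dead-zone width below are illustrative values, not the tuned project parameters.<br />

```python
def hlc_output(error, d_error, kp=1.0, kd=0.1, dead_zone=0.1):
    """High-level controller for one state: zero output inside the dead
    zone; plain PD outside it. The error is deliberately NOT offset by
    the dead-zone width, to avoid sending small commands in the LLC's
    oscillatory region."""
    if abs(error) < dead_zone:
        return 0.0
    return kp * error + kd * d_error
```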
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. In this method, the sequence of rotations around the specific axes matters. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
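With roll and pitch assumed zero, the transformation reduces to a planar rotation by the yaw angle ψ; a minimal sketch:<br />

```python
import math

def global_to_drone(vx_g, vy_g, psi):
    """Rotate a planar command from the field frame into the drone
    frame using only the yaw angle psi (roll and pitch assumed ~0)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g,    # drone x-axis component
            -s * vx_g + c * vy_g)   # drone y-axis component
```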
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information about the game can be extracted and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) which is called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the code base of TechUnited. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div><br />
<br />
https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45699 Implementation MSD16 2017-10-22T22:07:27Z<p>Tolcer: /* Locating of the Objects : Ball & Player */</p><br />
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is and do not alter it. Preferably, we would use this software to also process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
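The RGB-to-YCbCr conversion itself is a fixed linear map; the per-pixel sketch below uses the full-range BT.601 coefficients (Matlab's `rgb2ycbcr` uses the scaled studio-range variant, so the exact numbers differ slightly).<br />

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 conversion for one pixel (r, g, b in 0..255).
    Y is luma; Cb and Cr are blue- and red-difference chroma around 128."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

A gray pixel maps to Cb = Cr = 128, so color filtering reduces to selecting a region of the CbCr-plane, largely independent of brightness.<br />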
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm was updated and reused in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; colors that lie in the upper-left corner of the CbCr-plane. A binary image is created where the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, for noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image, a blob recognition algorithm returns blobs with their properties, such as the blob center and major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For each remaining candidate, a confidence is calculated, based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
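The final confidence step can be sketched as a small scoring function; approximating the blob radius by the mean semi-axis is an assumption on our side.<br />

```python
def ball_confidence(minor_axis, major_axis, r_ball_px):
    """Score a blob as a ball candidate: roundness times size match,
    following the formula above. Axes are full lengths in pixels;
    r_ball_px is the expected ball radius at the current height."""
    r_blob = (minor_axis + major_axis) / 4.0  # mean semi-axis (assumed)
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball_px) / max(r_blob, r_ball_px)
    return roundness * size_match
```

A perfectly round blob of exactly the expected radius scores 1.0; elongated or wrongly sized blobs score lower and can be rejected below some threshold.<br />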
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of filtering color on the CbCr-plane, the filtering is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of their surroundings. Moreover, the range of blobs accepted as possible players is larger than it was for the ball detection. This is done because the players are not perfectly round like the ball: seen from the top, a player appears different than when seen from an angle. A wider acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection uses the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
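This check can be wrapped as a small predicate, for example:<br />

```python
def possible_collision(major_axis, minor_axis, minimal_object_radius):
    """A single dark blob may actually be two touching players when it
    is clearly elongated and large relative to one player's minimal
    radius, as expressed by the condition above."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * minimal_object_radius
            and major_axis >= 4 * minimal_object_radius)
```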
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch, yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image processing algorithms are developed under an essential assumption: the drone attitude is well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore, these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, whose output data is accessible. The obtained altitude data is fused with the planar position data, resulting in the position vector of the drone.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further according to the following principles:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw ψ orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now, the calculated pixel coordinates of the detected object must be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera, so the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. The FOV information is given in the following sections.<br />
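Putting the principles above together, the full pixel-to-field mapping can be sketched as below; the camera lever arm `cam_offset` is an assumed example value, not the measured one.<br />

```python
import math

def pixel_to_field(px, py, drone_x, drone_y, psi, height,
                   cam_offset=0.15, fov_diag_deg=60.0, res=(640, 480)):
    """Map pixel coordinates (px, py), measured from the image centre
    with px along the drone's x-axis, to field coordinates in metres.
    Assumes the camera looks straight down (roll = pitch = 0)."""
    diag_px = math.hypot(*res)
    m_per_px = 2.0 * height * math.tan(math.radians(fov_diag_deg / 2.0)) / diag_px
    # object position in the drone frame: camera centre plus scaled pixels
    xd = cam_offset + px * m_per_px
    yd = py * m_per_px
    # rotate by yaw and translate into the field frame
    c, s = math.cos(psi), math.sin(psi)
    return drone_x + c * xd - s * yd, drone_y + s * xd + c * yd
```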
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the world model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path-planning block generates a reference point for an agent's controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not it has been updated by the agent's camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors addressed in the path-planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is also possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial conditions of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) of the drone is calculated (see Fig.3). The target position is simply calculated from the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should be determined only in the moving direction of the turtle, so only the X component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning based on the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to each drone in a direction that maintains a safe distance; the commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an area of interest for others who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
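Although not implemented in this project, the repelling command described above could look like the sketch below; the threshold and gain values are illustrative.<br />

```python
import math

def repel_command(own_pos, own_vel, other_pos, safe_dist=1.0, gain=1.0):
    """Return a strong velocity command perpendicular to the own
    velocity, on the side pointing away from the other drone, or None
    when the separation is already safe."""
    dx, dy = other_pos[0] - own_pos[0], other_pos[1] - own_pos[1]
    if math.hypot(dx, dy) >= safe_dist:
        return None
    px, py = -own_vel[1], own_vel[0]   # one perpendicular direction
    if px * dx + py * dy > 0:          # it points toward the other drone
        px, py = -px, -py              # ... so take the opposite one
    n = math.hypot(px, py) or 1.0      # guard against zero velocity
    return gain * px / n, gain * py / n
```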
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
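A minimal sketch of this storage pattern; the actual set/get names are those of Tables 1 and 2, so the ones below are illustrative only.<br />

```python
class Player:
    """Position holder for one player; players are a class of their own
    because their number can vary."""
    def __init__(self):
        self.pos = None  # (x, y) in field coordinates, metres

class WorldModel:
    """Central storage: values change only through set_* functions, so
    no skill accidentally overwrites another skill's data."""
    def __init__(self, n):
        self._ball = None
        self.players = [Player() for _ in range(2 * n)]  # both teams

    def set_ball(self, x, y):
        self._ball = (x, y)

    def get_ball(self):
        return self._ball

    def set_player(self, i, x, y):
        self.players[i].pos = (x, y)
```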
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; rather, its hypothesis is updated purely by new measurements. If two consecutive measurements are further than 0.5 meters removed from the estimate at that time, the last measurement acts as the new initial value for the strong filter. <br><br><br />
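The re-initialization rule above can be sketched as follows. This is an illustrative Python version only (the project code is MATLAB); the 0.5 m threshold and the two-consecutive-outliers rule come from the text, while the smoothing constant ALPHA of the 'strong' filter is an assumed value.

```python
# Sketch of the outlier handling of the strong/weak hypothesis tracker.
# THRESHOLD and N_OUTLIERS follow the text; ALPHA is an illustrative assumption.

THRESHOLD = 0.5   # metres between estimate and measurement
N_OUTLIERS = 2    # consecutive outliers needed before re-initialising
ALPHA = 0.8       # assumed smoothing weight of the 'strong' filter

def make_tracker():
    return {"estimate": None, "outliers": []}

def update(tracker, z):
    """Feed one (x, y) measurement; returns the current strong estimate."""
    est = tracker["estimate"]
    if est is None:
        tracker["estimate"] = z
        return z
    dist = ((z[0] - est[0])**2 + (z[1] - est[1])**2) ** 0.5
    if dist > THRESHOLD:
        tracker["outliers"].append(z)
        if len(tracker["outliers"]) >= N_OUTLIERS:
            # Two consecutive far-away measurements: assume a real change in
            # direction and restart the strong filter from the last measurement.
            tracker["estimate"] = z
            tracker["outliers"] = []
    else:
        tracker["outliers"] = []
        # Ordinary 'strong' update: lean mostly on the previous estimate.
        tracker["estimate"] = (ALPHA * est[0] + (1 - ALPHA) * z[0],
                               ALPHA * est[1] + (1 - ALPHA) * z[1])
    return tracker["estimate"]
```

A single outlier leaves the strong estimate untouched; only a second consecutive outlier triggers the restart, which filters out isolated false positives from the image processing.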
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate from inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the nearest players. It performs a nearest-neighbor search for each incoming position measurement, to match it to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. when two measurements are matched to the same player). In this case, the algorithm finds the second-nearest neighbor for the second measured player. With a high update frequency and only two players, this is generally not a problem. However, with a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
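The greedy assignment described above can be sketched as follows. This is an illustrative Python version (the project code is MATLAB), and the function name and data layout are made up: a measurement whose nearest player is already taken falls back to the next-nearest free player, as in the text.

```python
# Greedy nearest-neighbour matching of position measurements to players.
# players and measurements are lists of (x, y) tuples.

def match(measurements, players):
    """Return a list of player indices, one per measurement (or None)."""
    taken = set()
    assignment = []
    for z in measurements:
        # player indices sorted by squared distance to this measurement
        order = sorted(range(len(players)),
                       key=lambda i: (players[i][0] - z[0])**2 +
                                     (players[i][1] - z[1])**2)
        # first player in that order not already claimed by a measurement
        chosen = next((i for i in order if i not in taken), None)
        if chosen is not None:
            taken.add(chosen)
        assignment.append(chosen)
    return assignment
```

As noted in the text, this greedy scheme is not globally optimal: the result depends on the order in which measurements arrive, which only matters when players are close together or numerous.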
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and sideways velocities in the body frame are measured by sensors inside the drone. In addition, there are three LEDs on the drone which can be detected by the camera above the field. From the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and suppress measurement noise, so that the subsequent closed-loop drone control can be robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
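To illustrate the role of the filter, the sketch below shows a minimal 1-D constant-velocity Kalman filter that keeps predicting through frames in which the top camera does not see the drone's LEDs (around 25% of the data, as noted later in this section). This is an illustrative Python sketch only: all matrices and noise levels are assumed values, not the identified drone model from this section.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter. When the camera misses the
# LEDs (z is None), only the prediction step runs, so the estimate keeps
# evolving. Noise levels q, r are illustrative assumptions.

def kalman_step(x, P, z, dt=1/30, q=1e-3, r=1e-2):
    """One predict(+update) step. x = [position, velocity], z = measurement or None."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
    H = np.array([[1.0, 0.0]])              # camera measures position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:                       # update only when the LED was found
        y = np.array([z]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P
```

In the actual project, the prediction model is the identified second-order drone model described below, and the filtering is done per axis in the body frame.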
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in Figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. Figure 2 gives a visual impression of the original data measured by the top camera, and clearly indicates the motion of the drone in one degree of freedom. To make the signal continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces a reasonable estimate for the empty data points. <br />
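The gap-filling step can be sketched as below; this is an illustrative Python version (the project preprocessing is done in MATLAB), with empty camera samples represented as NaN and filled by linear interpolation over the valid samples.

```python
import numpy as np

# Linear interpolation over missing camera samples. Empty measurements are
# encoded as NaN; valid samples define the interpolation support.

def fill_gaps(t, x):
    """Linearly interpolate NaN entries of x over time stamps t."""
    x = np.asarray(x, dtype=float)
    valid = ~np.isnan(x)
    return np.interp(t, np.asarray(t, dtype=float)[valid], x[valid])
```

More elaborate schemes (e.g. spline interpolation) are possible, but for a ~25% dropout rate at 30 Hz, linear interpolation already gives a reasonable continuous signal.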
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global coordinate system. <br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic concept is to filter the data in the body frame, which avoids a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world no system is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model in state-space form is: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman models the AR.Drone with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world, no system is perfectly linear; the nonlinear behavior of the system may cause the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery and a fixed orientation, starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used as the referee platform. The relevant built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone’s own structure, control electronics and software are used for positioning the drone. Apart from that, low-level control of a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; for refereeing, however, it should look downwards. Therefore, the first idea was to disassemble the camera and mount it on a swivel, tilted down 90 degrees, which would require some structural changes. Since the entire implementation is achieved in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some trial and error, it was observed that capturing and transferring the images of the embedded drone camera is not straightforward in MATLAB: the drone camera is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, defined in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a view of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated for the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
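The distance-per-pixel numbers in Table 2 can be reproduced with a back-of-the-envelope computation, sketched here in Python. A simple pinhole model looking straight down is assumed (our assumption, not a statement from the measurements); the 92° diagonal FOV and the 640x360 resolution follow the text, while the 1.5 m height in the test is an arbitrary example value.

```python
import math

# Ground footprint and metres-per-pixel from a diagonal FOV, assuming a
# pinhole camera pointing straight down from height_m.

def footprint(diag_fov_deg, res_w, res_h, height_m):
    diag_m = 2 * height_m * math.tan(math.radians(diag_fov_deg) / 2)
    diag_px = math.hypot(res_w, res_h)          # diagonal length in pixels
    width_m = diag_m * res_w / diag_px          # horizontal ground coverage
    height_img_m = diag_m * res_h / diag_px     # vertical ground coverage
    return width_m, height_img_m, width_m / res_w   # last value: metres/pixel
```

Such a computation only approximates a real camera (no lens distortion, no tilt), which is one reason the measured FOV deviates from the specified one.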
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a Wi-Fi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
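The initialization can be sketched as follows in Python (the project uses MATLAB UDP objects). The AT-command string format follows the AR.Drone SDK; the host and ports match the values listed above. The actual sends are commented out so the sketch stands alone without a drone on the network.

```python
import socket

# Sketch of the drone initialisation. AT commands are plain strings with a
# running sequence number, terminated by a carriage return (per the SDK).

DRONE_IP, CMD_PORT, NAV_PORT = "192.168.1.1", 5556, 5554
_seq = 0

def at(cmd, *args):
    """Build one AT command with a running sequence number."""
    global _seq
    _seq += 1
    tail = "," + ",".join(args) if args else ""
    return f"AT*{cmd}={_seq}{tail}\r"

def init_commands():
    return [
        at("CONFIG", '"general:navdata_demo"', '"TRUE"'),  # reduced navdata set
        at("FTRIM"),                                       # flat-trim reference
    ]

# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAV_PORT))   # wake navdata stream
# for c in init_commands():
#     sock.sendto(c.encode(), (DRONE_IP, CMD_PORT))
```

The exact initiation handshake is the one shown in the figure above; the wake-up datagram and CONFIG step here are the commonly documented SDK sequence and should be checked against that figure.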
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
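The input side of such a wrapper can be sketched as follows (the project wrapper is a MATLAB function; this Python version only illustrates the encoding). Per the AR.Drone SDK, each float argument of the AT*PCMD command is transmitted as the signed 32-bit integer that shares the float's IEEE-754 bit pattern; the navdata-parsing side of the wrapper is omitted here.

```python
import struct

# Pack the four [-1, 1] command doubles into an AT*PCMD string.

def f2i(x):
    """Reinterpret a 32-bit float's bits as a signed int (SDK encoding)."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def pcmd(seq, roll, pitch, gaz, yaw):
    flag = 1  # progressive commands enabled
    vals = ",".join(str(f2i(v)) for v in (roll, pitch, gaz, yaw))
    return f"AT*PCMD={seq},{flag},{vals}\r"
```

With all four values zero the drone hovers; the sequence number must keep increasing, matching the `at()` convention used during initialization.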
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project, the Turtle is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and take images. Given the game situation and the agents’ positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, the planar motion of the drone in (x, y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region have not been offset by the dead-zone width. This approach prevents sending small commands, which lie in the oscillation region, to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
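The dead-zone PD law described above can be sketched per axis as follows. This is an illustrative Python version (the controller is implemented in MATLAB/Simulink); the zone width and the PD gains are assumed example values, and, as in the text, the error outside the dead zone is deliberately not offset by the zone width.

```python
# Per-axis dead-zone PD law of the high-level controller (before rotation to
# the body frame). DEAD_ZONE, KP and KD are illustrative assumptions.

DEAD_ZONE = 0.05   # [m], assumed width of the comfort zone
KP, KD = 0.6, 0.2  # assumed PD gains

def hlc_output(error, d_error):
    """High-level controller output for one state."""
    if abs(error) < DEAD_ZONE:
        return 0.0  # drone is inside its comfort zone: send no command
    # PD term on the full error (no offset by the dead-zone width)
    return KP * error + KD * d_error
```

No integral term is included, matching the argument above that there is no position-dependent force in the drone's equation of motion.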
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about sequentially displaced axes of the reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction does not change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
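The reduced, yaw-only transformation above amounts to a planar rotation, sketched here in Python for illustration (the project implements it in Simulink):

```python
import math

# With roll and pitch assumed small, the global-to-body transformation
# reduces to a planar rotation by the yaw angle psi.

def global_to_body(vx, vy, psi):
    """Rotate a global-frame vector (vx, vy) into the body frame."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx + s * vy, -s * vx + c * vy)
```

The inverse transformation (body to global) uses the transpose, i.e. the same rotation with -psi.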
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state can be computed, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the information on the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div><br />
<br />
Implementation MSD16 (2017-10-22, Tolcer: /* Locating of the Objects : Ball & Player */)<br />
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we do not alter it and use it as is. Ideally, we could use this software to also process the images from the drone; however, understanding years’ worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB’s Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
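The per-pixel conversion can be sketched as follows. Note that this sketch uses the full-range (JPEG) YCbCr coefficients; MATLAB's `rgb2ycbcr` uses the slightly different studio-range BT.601 scaling, so the exact numbers here are illustrative.

```python
# RGB -> YCbCr conversion (full-range / JPEG coefficients). In CbCr space,
# yellow/orange ball colours end up at low Cb and high Cr, which is the
# corner of the plane used for the colour thresholding described below.

def rgb_to_ycbcr(r, g, b):
    """r, g, b in 0..255; returns (Y, Cb, Cr) in 0..255 (full range)."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

Thresholding in CbCr rather than RGB makes the ball detection far less sensitive to brightness changes, since luminance is isolated in the Y channel.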
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line-detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow: colors that lie in the upper-left corner of the CbCr-plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest a value of 0. Next, for noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. A blob-recognition algorithm then returns the blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball: blobs that are too big or too small are removed from the list, and for each remaining candidate a confidence is calculated, based on the blob size and roundness: <br />
<br />
<math display="center">\text{confidence} = \frac{\text{minor axis}}{\text{major axis}} \cdot \frac{\min(R_{\text{blob}}, R_{\text{ball}})}{\max(R_{\text{blob}}, R_{\text{ball}})}</math><br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
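The size check and confidence computation described above can be sketched as follows. This is an illustrative Python sketch (the project itself uses Matlab's image processing toolbox); the blob-radius bounds and the way the blob radius is derived from the axis lengths are assumptions for illustration.<br />

```python
def ball_confidence(minor_axis, major_axis, r_ball_px, r_min_px=5.0, r_max_px=60.0):
    """Return a confidence in [0, 1] that a blob is the ball, or None if the
    blob size is outside the accepted range (bounds here are illustrative)."""
    # approximate blob radius as the mean of the two semi-axes, in pixels
    r_blob = (minor_axis + major_axis) / 4.0
    if r_blob < r_min_px or r_blob > r_max_px:
        return None  # blob too small or too large to be the ball
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball_px) / max(r_blob, r_ball_px)
    return roundness * size_match
```

A perfectly round blob whose radius matches the expected ball radius gets confidence 1; elongated or wrongly sized blobs score lower.<br />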
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blobs that could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different than when seen from an angle. A bigger range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. Similar to the line detection case, the algorithm developed by the previous generation is used essentially. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system is still known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This extension was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when the images of the playing field show no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs generated in the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and they are compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
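The same condition, written as a small Python sketch (the project code is in Matlab; the function name here is illustrative):<br />

```python
def possible_collision(minor_axis, major_axis, minimal_object_radius):
    """Flag a blob as a possible two-player collision when it is clearly
    elongated and large enough to contain two players, using the thresholds
    from the condition above."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * minimal_object_radius
            and major_axis >= 4 * minimal_object_radius)
```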
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in several ways. In this project, the planar position (x, y, ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image-processing algorithms are developed under one essential assumption: the drone's angular position is stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore, these two coordinates are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, and its output data is accessible. The obtained altitude data is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Next, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further according to the following principles:<br />
* The focal center of the camera is assumed to be coincident with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone on drone's roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed in a way that, the narrow edge of the image is parallel to the y-axis of the drone as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of image) with respect to the origin of the drone is known; it is a drone-fixed position vector lying along the x-axis of the drone. Taking into account the principles above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). The conversion ratio changes with the height of the camera, so the height information of the drone is used: from the height of the drone and the FOV of the camera, the ratio of pixels to millimeters is calculated. The relation between the real-world coordinates and the FOV is given in the section [[Implementation|Ai-Ball : Imaging from the Drone]]<br />
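The steps above can be sketched end to end in Python (the project code is in Matlab). The camera offset, FOV value, and sign conventions for the image axes are illustrative assumptions consistent with the principles listed above; the real pipeline would use the measured values.<br />

```python
import math

def pixel_to_field(px, py, drone_x, drone_y, psi, height,
                   cam_offset=0.15, fov_h=math.radians(70), img_w=640):
    """Convert a detected pixel (px, py), measured from the image centre
    (x right, y down), into field coordinates in metres. Assumes a
    downward-looking camera (zero roll/pitch) mounted cam_offset metres
    ahead of the drone origin along the drone x-axis."""
    # metres per pixel from the flying height and the horizontal FOV
    ground_width = 2.0 * height * math.tan(fov_h / 2.0)
    m_per_px = ground_width / img_w
    # offset in the drone body frame (image x-axis assumed aligned with the
    # drone x-axis, narrow image edge parallel to the drone y-axis)
    bx = px * m_per_px + cam_offset
    by = -py * m_per_px
    # rotate by the yaw angle into the field frame and add the drone position
    fx = drone_x + bx * math.cos(psi) - by * math.sin(psi)
    fy = drone_y + bx * math.sin(psi) + by * math.cos(psi)
    return fx, fy
```

For example, a detection at the image centre maps to the camera's ground position: the drone position plus the yaw-rotated camera offset.<br />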
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents, which is sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by which agent. For instance, it sends "detect ball" as a task to agent A (drone) and "locate player" to agent B. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the World Model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path-planning block: first, the case of multiple drones, where collisions between them must be avoided; second, generating an optimal path as the reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can be taken into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve it, we require a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the initial conditions of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. The same strategy can be applied to the ground agents, which move only in one direction: for the ground robot, the reference value should be determined only in the moving direction of the turtle, so only the x-component (turtle moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
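The search for the time ahead can be sketched as follows. This Python sketch replaces the identified drone-plus-controller model with a crude constant-speed time-to-target model (an assumption for illustration; the project would use the identified dynamics), and assumes a constant-velocity ball prediction.<br />

```python
import math

def find_time_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    t_max=5.0, dt=0.01):
    """Search for the smallest look-ahead time t0 such that the drone can
    reach the ball's predicted position in exactly t0 seconds, i.e. t0 = TT.
    Returns (t0, target position)."""
    for i in range(int(t_max / dt) + 1):
        t0 = i * dt
        # ball position predicted t0 seconds ahead (constant-velocity model)
        tx = ball_pos[0] + ball_vel[0] * t0
        ty = ball_pos[1] + ball_vel[1] * t0
        # time-to-target under the constant-speed drone model
        dist = math.hypot(tx - drone_pos[0], ty - drone_pos[1])
        tt = dist / drone_speed
        if tt <= t0:
            return t0, (tx, ty)
    # no intersection within t_max: fall back to the current ball position
    return 0.0, tuple(ball_pos)
```

The returned target [x(t+t0), y(t+t0)] is then sent to the agent controller instead of [x(t), y(t)].<br />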
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above a field, the path planning should create paths that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning calculated from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision-avoidance mode to repel the drones from getting closer. This is achieved by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone. This command is sent to the LLC and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance has not been implemented; however, it could be an interesting area for others to continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
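Since this block was not implemented in the project, the following is only a sketch of the idea described above: when two drones get too close, each receives a velocity command perpendicular to its own velocity vector, with the sign chosen so that it points away from the other drone. The distance threshold and gain are hypothetical parameters.<br />

```python
import math

def repel_commands(p1, v1, p2, v2, safe_dist=2.0, gain=1.0):
    """If two drones are closer than safe_dist, return one repelling velocity
    command per drone (perpendicular to its velocity, pointing away from the
    other drone); otherwise return None (no override of normal planning)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) >= safe_dist:
        return None

    def perp_away(v, away):
        # two unit vectors are perpendicular to v; pick the one pointing 'away'
        n = math.hypot(v[0], v[1]) or 1.0
        cand = (-v[1] / n, v[0] / n)
        if cand[0] * away[0] + cand[1] * away[1] < 0:
            cand = (-cand[0], -cand[1])
        return (gain * cand[0], gain * cand[1])

    return perp_away(v1, (-dx, -dy)), perp_away(v2, (dx, dy))
```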
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
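The storage design above, i.e. globally readable state that is only changed through dedicated 'set' functions, can be sketched as follows. The project's class is written in MATLAB and its exact method names are in Tables 1 and 2; the Python names below are illustrative approximations of that pattern.<br />

```python
class WorldModel:
    """Minimal sketch of the WM storage unit: last known poses of ball,
    drone, turtle, and n players per team, changed only via set_* methods."""

    def __init__(self, n_players):
        self._ball = None                          # (x, y)
        self._drone = None                         # (x, y, psi, z)
        self._turtle = None                        # (x, y, psi)
        self._players = [None] * (2 * n_players)   # both teams

    # write access goes through dedicated setters, preventing processes
    # from accidentally overwriting WM data
    def set_ball(self, pos):
        self._ball = tuple(pos)

    def set_drone(self, pose):
        self._drone = tuple(pose)

    def set_turtle(self, pose):
        self._turtle = tuple(pose)

    def set_player(self, i, pos):
        self._players[i] = tuple(pos)

    # read access is unrestricted, mirroring the globally accessible design
    def ball(self):
        return self._ball

    def player(self, i):
        return self._players[i]
```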
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track of whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are both used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in the case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate from inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
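The two-hypothesis idea can be sketched as below. The exact update equation and the role of the parameters α_v, α_x, α_z are given in the formula and table above; this Python sketch substitutes an assumed exponential-smoothing update for the particle-filter update, and implements only the reset rule described in the text (two consecutive measurements more than 0.5 m from the strong estimate re-initialise it). It is a conceptual stand-in, not the project's filter.<br />

```python
import math

class DualHypothesisBallFilter:
    """Sketch of the 'strong'/'weak' two-hypothesis ball tracker: the strong
    estimate smooths heavily, while the raw measurements play the role of the
    weak hypothesis. Sustained outliers trigger a re-initialisation."""

    def __init__(self, x0, alpha_v=0.8, alpha_x=0.3, reset_dist=0.5):
        self.x = list(x0)        # strong position estimate (x, y)
        self.v = [0.0, 0.0]      # strong velocity estimate
        self.alpha_v = alpha_v   # velocity smoothing ('strength')
        self.alpha_x = alpha_x   # trust in new position measurements
        self.reset_dist = reset_dist
        self._outliers = 0
        self._z_prev = list(x0)

    def update(self, z, dt):
        # predict, then compare the measurement against the prediction
        pred = [self.x[i] + self.v[i] * dt for i in range(2)]
        err = math.hypot(z[0] - pred[0], z[1] - pred[1])
        if err > self.reset_dist:
            self._outliers += 1
            if self._outliers >= 2:  # sustained: treat as a direction change
                self.v = [(z[i] - self._z_prev[i]) / dt for i in range(2)]
                self.x = list(z)
                self._outliers = 0
        else:
            self._outliers = 0
            v_meas = [(z[i] - self._z_prev[i]) / dt for i in range(2)]
            self.v = [self.alpha_v * self.v[i] + (1 - self.alpha_v) * v_meas[i]
                      for i in range(2)]
            self.x = [(1 - self.alpha_x) * pred[i] + self.alpha_x * z[i]
                      for i in range(2)]
        self._z_prev = list(z)
        return tuple(self.x)
```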
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
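The greedy nearest-neighbour matching described above can be sketched as follows. The project's 'Match' function is in Matlab; this Python version reproduces the described behaviour, including the fallback to the next-nearest free player when two measurements map to the same player.<br />

```python
import math

def match_measurements(measurements, last_positions):
    """Match measured positions to the players that are closest by: for each
    measurement, take the nearest player from the last known positions; if
    that player is already taken, fall back to the next-nearest free one.
    Returns one player index per measurement."""
    taken = set()
    matches = []
    for z in measurements:
        order = sorted(range(len(last_positions)),
                       key=lambda i: math.hypot(z[0] - last_positions[i][0],
                                                z[1] - last_positions[i][1]))
        for i in order:
            if i not in taken:
                taken.add(i)
                matches.append(i)
                break
    return matches
```

As the text notes, this greedy assignment is not optimal: with many players entering and leaving a sensor's field of view, a globally optimal assignment would be more robust.<br />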
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the subsequent closed-loop control system for the drone can be robust. Since the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig. 2) provides a visual impression of the original data measured from the top camera; the motion data clearly indicates what the motion of the drone looks like in one degree of freedom. To make it continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces reasonable estimates for the empty data points. <br />
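A gap-filling step of this kind can be sketched as follows (Python sketch; the project did this in Matlab, and whether it used linear interpolation or another scheme is not stated, so linear interpolation is an assumption here).<br />

```python
def fill_gaps(samples):
    """Linearly interpolate missing camera samples (None entries), as when
    roughly 25% of the top-camera measurements are empty. Leading/trailing
    gaps are filled by holding the nearest valid sample."""
    known = [(i, v) for i, v in enumerate(samples) if v is not None]
    if not known:
        return samples[:]
    out = samples[:]
    for idx in range(len(out)):
        if out[idx] is not None:
            continue
        prev = max(((i, v) for i, v in known if i < idx),
                   default=None, key=lambda t: t[0])
        nxt = min(((i, v) for i, v in known if i > idx),
                  default=None, key=lambda t: t[0])
        if prev is None:
            out[idx] = nxt[1]          # hold first valid sample backwards
        elif nxt is None:
            out[idx] = prev[1]         # hold last valid sample forwards
        else:
            w = (idx - prev[0]) / (nxt[0] - prev[0])
            out[idx] = prev[1] + w * (nxt[1] - prev[1])
    return out
```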
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: one in the body frame, the other in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between body frame and global frame via the rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified from the input commands (a, b, c and d) to the response in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame so as to avoid a parameter-varying Kalman filter. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
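The planar rotation between frames can be written out as a small sketch. Only the yaw angle ψ enters the rotation, which is why applying it outside the filter keeps the Kalman filter itself time-invariant (function names here are illustrative).<br />

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame planar vector into the global frame using the
    yaw angle psi (standard 2D rotation matrix)."""
    vx_g = vx_body * math.cos(psi) - vy_body * math.sin(psi)
    vy_g = vx_body * math.sin(psi) + vy_body * math.cos(psi)
    return vx_g, vy_g

def global_to_body(vx_g, vy_g, psi):
    """Inverse rotation (the transpose of the rotation matrix)."""
    return body_to_global(vx_g, vy_g, -psi)
```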
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions need to be made to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves similarly to a second-order system. The state is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result represents the extent to which the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was measured and modeled with a delay of four samples due to the wireless communication. Compared with the results measured several times, the estimate is reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may cause the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue which has been investigated: the data selected for identification is measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y-direction is described as a state-space model with state X = [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant properties of the drone, as given on the manufacturer's website, are listed below in Table 1; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, low-level control of a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downward. Therefore, the first idea was to disassemble it and mount the camera on a swivel, tilting it down 90 degrees, which requires some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images should be accessible from MATLAB. However, after some trial and error, it turned out that capturing and transferring the images of the embedded drone camera to MATLAB is not easy or straightforward. Further effort showed that using this drone camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is closed, it is hard to access some of the drone's data, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz (one frame every 2.5 s) at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used for processing in order to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, defined in the figure below. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed that the effective FOV is close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2. The corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, that camera is not used in the final project because of the difficulty of acquiring its images in MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a Wi-Fi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
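The project itself opens these channels as MATLAB UDP objects; as a hedged illustration only, the same two channels (ports and timeout taken from the list above, everything else left at OS defaults) could be set up in Python roughly as follows:<br />

```python
import socket

def make_drone_channels(drone_ip="192.168.1.1",
                        control_port=5556, navdata_port=5554):
    """Create the two UDP sockets used to talk to the AR.Drone 2.0.

    Ports and the 1 ms timeout follow the initialization list above;
    all other socket properties are left at their defaults, mirroring
    the use of MATLAB's default UDP object values.
    """
    control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata.bind(("", navdata_port))
    navdata.settimeout(0.001)  # 1 ms timeout, as in the list above
    return control, navdata, (drone_ip, control_port), (drone_ip, navdata_port)

# The navdata stream is then started by sending one wake-up datagram to
# the drone's navdata port, after which 500-byte navdata packets arrive:
#   control, navdata, ctrl_addr, nav_addr = make_drone_channels()
#   navdata.sendto(b"\x01\x00\x00\x00", nav_addr)
```
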
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference for the horizontal plane has to be set for the drone's internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be viewed as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function was written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively, the third is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After this search, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is mounted facing down at the front of the drone. To reduce the weight of the added system, the battery of the camera was removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV was shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields images of 640x480 pixels.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is necessary to determine the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
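The per-pixel ground distance follows from simple geometry: a camera at height h with diagonal FOV α images a ground diagonal of 2·h·tan(α/2), which is divided by the image diagonal in pixels. A minimal sketch of this conversion (Python for illustration; the project embeds the same relation in Simulink):<br />

```python
import math

def mm_per_pixel(height_mm, diag_fov_deg=60.0, width_px=640, height_px=480):
    """Ground-plane resolution of a downward-facing camera.

    Defaults match the Ai-Ball (60 deg diagonal FOV, 640x480 images).
    For a camera at height h with diagonal FOV alpha, the imaged ground
    diagonal is 2*h*tan(alpha/2); dividing by the image diagonal in
    pixels gives the real-world size of one pixel.
    """
    diag_px = math.hypot(width_px, height_px)  # 800 px for 4:3 VGA
    ground_diag = 2.0 * height_mm * math.tan(math.radians(diag_fov_deg) / 2.0)
    return ground_diag / diag_px
```

For example, at 1.5 m altitude each pixel covers roughly 2.2 mm on the ground, so the pixel-to-millimeter ratio must be recomputed whenever the drone altitude changes.<br />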
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. Details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive existing code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image-processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple trajectories such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative using PD coefficients (Fig. 4). Since there is no position-dependent force in the motion equation of the drone, an I-action is not necessary. Furthermore, to avoid oscillation in the unstable region of the drone's built-in LLC, the errors outside the dead-zone region are not offset from the dead-zone boundary. This prevents sending small, oscillation-inducing commands to the drone.<br />
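One axis of this dead-zone PD law can be sketched as follows (a Python illustration of the scheme described above, not the project's Simulink implementation; the gains, dead-zone width and saturation limit are placeholders):<br />

```python
def deadzone_pd(err, derr, kp, kd, dead, out_max=1.0):
    """One axis of the high-level controller described above.

    Inside the dead zone the command is zero (the drone's "comfort
    zone"); outside it, a plain PD law on the error and its derivative
    is used, without offsetting the error from the dead-zone boundary,
    so that small oscillation-inducing commands are never sent.
    """
    if abs(err) < dead:
        return 0.0
    u = kp * err + kd * derr
    # Saturate to the [-1, 1] range the drone wrapper expects.
    return max(-out_max, min(out_max, u))
```
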
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about sequentially displaced axes of the reference frame. These angles are generally referred to as Euler angles; within this method, the order of the rotations about the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
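Under this small-angle assumption, a velocity command <math>(v_x, v_y)</math> computed in the global frame can be mapped to the drone frame using only the yaw angle (a sketch of the reduced transformation; the full RPY matrix is not needed here):<br />

<math display="center">\begin{bmatrix} v_x^{drone} \\ v_y^{drone} \end{bmatrix} = \begin{bmatrix} \cos\psi & \sin\psi \\ -\sin\psi & \cos\psi \end{bmatrix} \begin{bmatrix} v_x \\ v_y \end{bmatrix}</math><br />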
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. Details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code from the TechUnited code base was taken out. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and which send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>

Implementation MSD16 (2017-10-22T22:05:10Z)
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the BOOP detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras, detecting balls, lines and players requires image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is and do not alter it. Preferably, we would use this software to also process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line-detection algorithm is updated and reused in this project. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to the code, but the algorithm itself is unchanged. The essential update is the separation of the line-detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used can be red, orange or yellow; colors that are in the upper-left corner of the CbCr-plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image a blob recognition algorithm returns blobs with their properties, such as the blob center and major- and minor axis length. From this list with blobs and their properties, it is determined if it could be a ball. Blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
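The confidence formula above can be sketched as a small function (Python for illustration; in the project it is part of the MATLAB toolchain). How the blob radius Rblob is derived from the blob's axes is not stated in the text, so the mean of the semi-axes is an assumption here:<br />

```python
def ball_confidence(minor_axis, major_axis, r_ball_px):
    """Confidence that a blob is the ball, as defined above: the
    product of a roundness term (minor/major axis ratio) and a size
    term comparing the blob radius to the expected ball radius
    r_ball_px at the current drone altitude."""
    # Assumed blob radius: mean of the semi-axes (not specified in the text).
    r_blob = 0.5 * (minor_axis + major_axis) / 2.0
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball_px) / max(r_blob, r_ball_px)
    return roundness * size
```

A perfectly round blob of exactly the expected radius thus scores 1.0, and the confidence decreases as the blob becomes elongated or mis-sized.<br />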
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than the surroundings. Moreover, the range of detected blobs which could be players is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different than when seen from an angle. A wider acceptance range for blobs reduces the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line-detection case, the algorithm developed by the previous generation is essentially reused; a detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an improvement was added to handle the cases where the ball position is predicted by the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be predicted, and based on this coordinate the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well; further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection uses the list of blobs generated by the object-detection algorithm. For each blob in the list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if (((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
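The intuition is that a blob that is clearly elongated, yet large enough in both axes to contain two players, is flagged as a possible collision. A minimal sketch of this test (Python for illustration, same thresholds as above):<br />

```python
def possible_collision(minor_axis, major_axis, min_radius):
    """Image-based collision test from the condition above: the blob
    must be elongated (axis ratio > 1.5) and at least two player
    diameters long and one player diameter wide."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)
```
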
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in various ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degree-of-freedom (DOF). The linear coordinates (x,y,z) and corresponding angular positions roll, pitch, yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image-processing algorithms are developed under one essential assumption: the drone's attitude is well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore, these two angular positions are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, for the refereeing tasks and the image processing, the drone altitude must also be known. The drone has its own altimeter, whose output data is accessible. The obtained altitude is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the coordinates of detected objects are obtained in pixels. To determine the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference coordinate frame. The coordinate system of the image in MATLAB is given below. Note that the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|400px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further based on the following principles:<br />
* The focal center of the camera is assumed to coincide with the center of the image.<br />
* The camera is always parallel to the ground plane; tilting of the drone about its roll (φ) and pitch (θ) axes is neglected.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between the camera, the drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|500px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known; it is a drone-fixed position vector lying along the x-axis of the drone. Taking the principles above into account and adding the known (measured) drone position to the position vector of the camera (including the yaw ψ orientation of the drone), the position of the image center with respect to the field reference coordinate frame can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now, the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion factor changes with the height of the camera, so the height information of the drone must be used. Using the drone height and the FOV of the camera, the pixel-to-millimeter ratio is calculated. The relation between real-world coordinates and the FOV is given in the section [[Ai-Ball : Imaging from the Drone]]<br />
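The whole chain, pixel offsets from the image center, height-dependent scaling, yaw rotation, and translation by the camera offset and drone position, can be sketched as follows. This is a Python illustration of the principles listed above, not the project's Simulink code; the 0.10 m camera offset and the image-axis sign conventions are assumptions:<br />

```python
import math

def pixel_to_world(px, py, drone_x, drone_y, psi, height,
                   cam_offset=0.10, diag_fov_deg=60.0,
                   width_px=640, height_px=480):
    """Map a detected pixel (px, py) to field coordinates.

    Steps: pixels measured from the image center are scaled by the
    height-dependent metres-per-pixel factor, shifted by the camera
    offset along the drone x-axis, rotated by the drone yaw psi, and
    translated by the drone position (drone_x, drone_y).
    """
    # metres per pixel from the diagonal FOV (same geometry as the
    # Ai-Ball FOV section; height in metres here)
    diag_px = math.hypot(width_px, height_px)
    m_per_px = 2.0 * height * math.tan(math.radians(diag_fov_deg) / 2.0) / diag_px

    # pixel offsets from the image center, in metres, in the drone frame
    # (assumed sign conventions)
    u = (px - width_px / 2.0) * m_per_px
    v = (py - height_px / 2.0) * m_per_px

    # camera center lies cam_offset along the drone x-axis (assumed value)
    xd = u + cam_offset
    yd = v

    # rotate by yaw into the field frame and add the drone position
    wx = drone_x + xd * math.cos(psi) - yd * math.sin(psi)
    wy = drone_y + xd * math.sin(psi) + yd * math.cos(psi)
    return wx, wy
```
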
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by the agents; for instance, it sends "detect ball" as a task to agent A (the drone) and "locate player" to agent B. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent's controller. As shown in Fig. 1, it is assumed that the world model can provide the position and velocity of objects such as the ball, whether or not it has been updated by the agent camera; in the latter case, the particle filter estimates the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path-planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as the reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns an agent the task of locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as the reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach yields better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to use for the desired reference. To solve this, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, the time-to-target (TT) of the drone is calculated for each candidate time ahead of the ball (see Fig. 3); the target position is simply the ball position extrapolated over that time ahead. The reference position is the position that satisfies t0 = TT; hence the reference is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move in only one direction, the same strategy can be applied: the reference value should then be determined only in the moving direction of the Turtle, so only the x-component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, path planning should create paths that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning computed from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision-avoidance mode to keep the drones from getting closer. This is achieved by sending a relatively strong command to the drones in a direction that maintains a safe distance; the commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and it is stopped once the drones are back at safe positions. Since this project deals with only one drone, collision avoidance has not been implemented; it could be an interesting extension for a follow-up project.<br />
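A minimal sketch of such a repelling command (Python; the safe distance and repel speed are placeholder values, and the distance criterion is a simplification of the trigger conditions described above):<br />

```python
import math

def repel_commands(p1, v1, p2, v2, safe_dist=1.5, repel_speed=1.0):
    """If two drones are closer than safe_dist, return velocity commands
    perpendicular to each drone's own velocity, directed away from the
    other drone; otherwise return None (normal path planning continues)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) >= safe_dist:
        return None

    def perp_away(v, away):
        # two unit vectors perpendicular to v; pick the one pointing 'away'
        n = math.hypot(v[0], v[1]) or 1.0
        c1 = (-v[1] / n, v[0] / n)
        c2 = (v[1] / n, -v[0] / n)
        d1 = c1[0] * away[0] + c1[1] * away[1]
        d2 = c2[0] * away[0] + c2[1] * away[1]
        return c1 if d1 > d2 else c2

    cmd1 = [repel_speed * c for c in perp_away(v1, (-dx, -dy))]
    cmd2 = [repel_speed * c for c in perp_away(v2, (dx, dy))]
    return cmd1, cmd2
```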
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
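The storage idea can be sketched in Python as follows (the actual World Model class is written in MATLAB; names such as set_ball are illustrative analogues of the set functions in Table 1):<br />

```python
class Player:
    """Per-player storage; players are a separate class because their
    number n varies, unlike the single hardcoded ball."""
    def __init__(self):
        self.position = (0.0, 0.0)

class WorldModel:
    """Central storage: last known positions of ball, drone, turtle and
    players, changed only through explicit set functions so that other
    processes cannot accidentally overwrite WM data."""
    def __init__(self, n):
        self.ball = (0.0, 0.0)
        self.drone = (0.0, 0.0)
        self.turtle = (0.0, 0.0)
        self.players = [Player() for _ in range(2 * n)]  # n per team

    def set_ball(self, x, y):
        self.ball = (x, y)

    def set_drone(self, x, y):
        self.drone = (x, y)

    def set_turtle(self, x, y):
        self.turtle = (x, y)

    def set_player(self, i, x, y):
        self.players[i].position = (x, y)
```

Initialization then mirrors the description above, e.g. W = WorldModel(2) for two players per team, after which W.set_ball(1.0, 2.0) updates the ball position.<br />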
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter offers clear advantages. A particle filter, also known as Monte Carlo Localization, is chosen. The main reason is that a particle filter can handle multiple-object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as explained below. Ideally, this filter should perform three tasks:<br>
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters from the estimate at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
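The reset logic described above can be sketched as follows (Python; the exponential smoothing stands in for the actual particle-filter update, and the 0.5 m threshold is taken from the text):<br />

```python
import math

class BallTracker:
    """Strong-filter estimate with the reset rule described above: two
    consecutive measurements further than `threshold` metres from the
    current estimate re-initialise the filter at the last measurement."""
    def __init__(self, threshold=0.5, alpha=0.2):
        self.estimate = None
        self.outliers = 0
        self.threshold = threshold
        self.alpha = alpha  # small alpha = 'strong' filter (trusts estimate)

    def update(self, z):
        if self.estimate is None:
            self.estimate = z
            return self.estimate
        dist = math.hypot(z[0] - self.estimate[0], z[1] - self.estimate[1])
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:   # change of direction, not a false positive
                self.estimate = z    # weak hypothesis becomes the new start
                self.outliers = 0
            return self.estimate
        self.outliers = 0
        # exponential smoothing stands in for the particle-filter update
        self.estimate = (self.estimate[0] + self.alpha * (z[0] - self.estimate[0]),
                         self.estimate[1] + self.alpha * (z[1] - self.estimate[1]))
        return self.estimate
```

A single outlier leaves the strong estimate untouched; a second consecutive outlier re-initialises it at the new ball position.<br />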
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, each sensor passes along a confidence parameter, like a variance in the case of normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate from inaccurate sensors. In the current implementation, this variance is fixed irrespective of the source, but the code is easily adaptable to integrate it.<br>
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br>
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
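A minimal sketch of such a ‘Match’ step (Python; greedy nearest-neighbour in measurement order, falling back to the nearest still-free player on a conflict, as described above):<br />

```python
def match(measurements, known_positions):
    """Match each measured position to the nearest known player (greedy,
    in measurement order); if that player is already taken, fall back to
    the nearest player that is still free. Returns one player index per
    measurement."""
    taken = set()
    result = []
    for mx, my in measurements:
        ranked = sorted(range(len(known_positions)),
                        key=lambda i: (known_positions[i][0] - mx) ** 2 +
                                      (known_positions[i][1] - my) ** 2)
        for i in ranked:
            if i not in taken:
                taken.add(i)
                result.append(i)
                break
    return result
```

As noted above, this greedy assignment is not globally optimal: when two measurements compete for the same player, the second one simply gets the next-nearest free player.<br />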
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and minimize the measurement noise, making the closed-loop control of the drone more robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-back tilt, a floating-point value in the range [-1, 1]. Command (b) is the left-right tilt, a floating-point value in the range [-1, 1]. d is the drone angular speed in the range [-1, 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, Psi) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the measured drone position information is incomplete. The example (fig.2) gives a visual impression of the original data measured from the top camera; the data clearly indicates the drone motion in one degree of freedom. To make the signal continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces reasonable estimates for the empty data points. <br />
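The gap-filling step can be sketched as plain linear interpolation (Python; the actual preprocessing is done in MATLAB, and missed detections are represented here as None):<br />

```python
def fill_gaps(samples):
    """Linearly interpolate None entries (missed drone detections)
    between the nearest valid measurements; gaps at the edges repeat
    the nearest valid value."""
    out = list(samples)
    valid = [i for i, v in enumerate(out) if v is not None]
    if not valid:
        return out
    for i in range(len(out)):
        if out[i] is not None:
            continue
        prev = max((j for j in valid if j < i), default=None)
        nxt = min((j for j in valid if j > i), default=None)
        if prev is None:
            out[i] = out[nxt]
        elif nxt is None:
            out[i] = out[prev]
        else:
            w = (i - prev) / (nxt - prev)
            out[i] = out[prev] + w * (out[nxt] - out[prev])
    return out
```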
====Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one in the body frame and one in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
Data can be transformed between body frame and global frame via a rotation matrix. To simplify the identification process, the rotation matrix is placed outside the Kalman filter: the identified model is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame so as to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
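The frame transformation can be sketched as follows (Python; with roll and pitch assumed small, as in this project, the rotation depends on the yaw angle psi only):<br />

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame using the
    planar (yaw-only) rotation matrix."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)

def global_to_body(vx_glob, vy_glob, psi):
    """Inverse rotation (transpose of the yaw rotation matrix)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_glob + s * vy_glob,
            -s * vx_glob + c * vy_glob)
```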
====Model identification from input to position ====<br />
Theoretically, the inputs and the corresponding velocity outputs are decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there are four samples of delay due to the wireless communication. Compared with results measured several times, the estimation is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; this nonlinear behavior may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant properties of the drone as given on the manufacturer's website are listed below in Table 1; internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller and driver electronics. Since it is a consumer product, its design, body and controller are very robust. Therefore, it was decided to use the drone's own structure, control electronics and software for positioning the drone; besides, controlling a drone from scratch is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. Therefore, the first idea was to disassemble it and connect the camera to a swivel to tilt it down 90 degrees, which would require some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after considerable trial and error it was observed that capturing and transferring the images of the embedded drone camera is not straightforward in MATLAB: using this camera is either incompatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly from MATLAB is not possible with the built-in software. An indirect route is therefore required, which costs processing time: the best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a FOV of about 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
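For illustration, the control-side initialization might look as follows in Python (the IP address and ports are taken from the list above; the AT*FTRIM command string follows the AR.Drone SDK convention of '\r'-terminated AT commands with a running sequence number):<br />

```python
import socket

ARDRONE_IP = "192.168.1.1"
AT_PORT = 5556        # control port from the list above
NAVDATA_PORT = 5554   # navdata port from the list above

def make_control_socket(timeout=0.001):
    """UDP socket for AT commands, with the 1 ms timeout listed above."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    return s

def ftrim_command(seq):
    """AT command that sets the horizontal-plane reference (flat trim)."""
    return "AT*FTRIM={}\r".format(seq)

def send_at(sock, cmd):
    """Send a single AT command string to the drone."""
    sock.sendto(cmd.encode("ascii"), (ARDRONE_IP, AT_PORT))
```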
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
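The input side of such a wrapper can be sketched as follows (Python; clamp_command and unpack_navdata are illustrative names, and the field order in unpack_navdata simply mirrors the output list above, not the raw navdata byte layout):<br />

```python
def clamp_command(cmd):
    """Saturate the four drone inputs to [-1, 1]:
    [x_tilt, y_tilt, z_speed, psi_rate]."""
    if len(cmd) != 4:
        raise ValueError("expected [x_tilt, y_tilt, z_speed, psi_rate]")
    return [max(-1.0, min(1.0, float(v))) for v in cmd]

def unpack_navdata(values):
    """Map already-decoded navdata values into named fields, in the
    order of the output list above."""
    keys = ["battery", "roll", "pitch", "yaw", "vx", "vy", "altitude"]
    return dict(zip(keys, values))
```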
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a framerate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After this search, we decided to use a WiFi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a WiFi connection; to connect to the camera, a WiFi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to know the real-world size of the image frame and the corresponding real-world dimension per pixel, and it is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
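Given the 60° diagonal FOV and the 640x480 resolution, the ground resolution at a given flying height follows directly (Python sketch, assuming square pixels and a camera pointing straight down):<br />

```python
import math

def metres_per_pixel(height, diag_fov_deg=60.0, res=(640, 480)):
    """Ground resolution of the downward camera at a given flying height,
    computed from the diagonal FOV and the image resolution."""
    diag_px = math.hypot(*res)                               # 800 px for 640x480
    diag_m = 2.0 * height * math.tan(math.radians(diag_fov_deg) / 2.0)
    return diag_m / diag_px
```

For example, at a flying height of 2 m the diagonal ground footprint is about 2.31 m, i.e. roughly 2.9 mm per pixel.<br />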
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive existing code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: they can move and take images. Depending on the game situation and the agent positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion-control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion-control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image-processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple, near-straight paths; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking these planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x,y,θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero; this results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the equations of motion of the drone, integral action is not necessary. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width; this prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
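A one-axis sketch of this dead-zone PD law (Python; the dead-zone width and the PD gains are placeholder values, not the tuned ones):<br />

```python
def dead_zone_pd(error, d_error, dead_zone=0.1, kp=0.5, kd=0.2):
    """High-level controller for one axis: zero output inside the dead
    zone (comfort zone); a PD command outside it. The PD term is not
    offset by the dead-zone width, so no small commands are sent near the
    oscillation region of the drone's built-in low-level controller."""
    if abs(error) <= dead_zone:
        return 0.0
    return kp * error + kd * d_error
```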
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame, generally referred to as Euler angles. Within this method, the order of rotation around the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles: the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi. It processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
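Sending a command string to the robot can be sketched as below. The command grammar itself lives in the GitHub repository; the example string here is purely hypothetical and only the transport (UDP datagrams of ASCII strings) follows the description above.

```python
import socket

def send_command(ip, port, command, sock=None):
    """Send one command string to the robot's Raspberry Pi over UDP.

    The Pi-side Python script receives such strings over Wi-Fi and
    forwards the parsed commands to the Arduino via USB. The command
    format below is an assumption, not the real protocol.
    """
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return sock.sendto(command.encode("ascii"), (ip, port))
```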
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The TechUnited player robots communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the code base of TechUnited. This piece consisted of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>

[https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45696 Implementation MSD16] (2017-10-22, Tolcer: /* Locating of the Objects : Ball & Player */)
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is and do not alter it. Preferably, we would use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
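The per-pixel conversion can be sketched as below, using the common full-range (JPEG-style) BT.601 coefficients for 8-bit channels; the project itself relies on Matlab's toolbox for this step, so treat the exact coefficients as an illustration.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr for 8-bit channel values (0..255).

    Colour thresholds for field, line and ball masks are then taken in
    the CbCr plane, where chroma is decoupled from brightness, instead
    of on the strongly correlated RGB axes.
    """
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```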
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; colors that lie in the upper-left corner of the CbCr plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to reduce noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
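The confidence formula above can be sketched directly; deriving the blob radius as the mean of the two semi-axes is an assumption here (the project may compute it differently).

```python
def ball_confidence(minor_axis, major_axis, r_ball):
    """Confidence that a blob is the ball.

    Roundness (minor/major axis ratio) times how closely the blob
    radius matches the expected ball radius, both in [0, 1].
    """
    r_blob = (minor_axis + major_axis) / 4.0  # mean semi-axis; assumption
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```

A perfectly round blob of exactly the expected radius scores 1.0; elongated or mis-sized blobs score lower.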
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blobs which could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball is. If a player is seen from the top, they appear different than when they are seen from an angle. A bigger range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. Similar to the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, the position of the ball with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information, the in/out decision can be further improved. This part was added to the ball out of pitch refereeing skill function. However, it sometimes yields false positive and false negative results; a further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take an image of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since the collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs that is generated in the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object. Moreover, the axes are compared with the minimal expected radius of the player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
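The predicate can be written as a small function; the threshold values are the ones stated above, while the function name is ours.

```python
def possible_collision(minor_axis, major_axis, min_object_radius):
    """Image-based collision test.

    An elongated blob (axis ratio > 1.5) that is also wide enough in
    both axes to contain two players is flagged as a possible collision.
    """
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * min_object_radius
            and major_axis >= 4 * min_object_radius)
```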
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in various ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed based on an essential assumption: the drone's angular positions are well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore, these two angles have not been taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle refereeing tasks and image processing, the drone altitude must also be known. The drone has its own altimeter and its output data is accessible. The obtained altitude data is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates should be transformed into the field reference coordinate axes. The coordinate system of the image in Matlab is given below. Note that, here, the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|500px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed further under the following assumptions:<br />
* The center of the image is assumed to be the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The height of the camera with respect to the drone base is zero.<br />
* The alignment between camera, drone and field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|800px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Note that the position of the camera (center of the image) with respect to the origin of the drone is known: it is a drone-fixed position vector lying along the x-axis of the drone. Taking into account the assumptions above and adding the known (measured) drone position to the position vector of the camera (including the yaw (ψ) orientation of the drone), the position of the image center with respect to the field reference coordinate axes can be obtained. <br />
<br />
The orientation of the received image with respect to the field reference coordinate axes should be fitted according to the figure. <br />
<br />
Now the calculated pixel coordinates of the detected object should be converted into real-world units (from pixels to millimeters). This conversion ratio changes with the height of the camera. Using the height of the drone and the FOV information of the camera, the pixel-to-millimeter ratio is calculated.<br />
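Under the downward-looking-camera assumptions above, the conversion can be sketched as follows. The geometry (ground width = 2·h·tan(FOV/2)) is the standard pinhole relation; function and parameter names are ours.

```python
import math

def pixel_to_metric(px_offset, image_width_px, fov_deg, height_mm):
    """Convert a pixel offset from the image centre to millimetres on
    the ground plane.

    Assumes the camera looks straight down (roll and pitch neglected).
    The ground width covered by the image is 2*h*tan(FOV/2), so one
    pixel corresponds to that width divided by the image width.
    """
    ground_width = 2.0 * height_mm * math.tan(math.radians(fov_deg) / 2.0)
    mm_per_px = ground_width / image_width_px
    return px_offset * mm_per_px
```

As the formula shows, the same pixel offset corresponds to a larger ground distance when the drone flies higher, which is why the altimeter reading is needed.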
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by the agents. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path planning block generates a reference point for an agent's controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects like the ball, whether or not it has been updated by an agent's camera. In the latter case, the particle filter gives an estimation of the ball's position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the path planning block. The first is related to the case of multiple drones, in order to avoid collisions between them. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, in the case of a large distance between drone and ball, the drone should track a position ahead of the object, to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, the time to target (TT) for the drone is calculated for each time step ahead of the ball (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied. For the ground robot, the reference value should be determined only in the moving direction of the Turtle. Hence, only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
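The search for the time ahead can be sketched as below. The drone-plus-controller travel-time model is passed in as a function (`time_to_target`), which is an assumption here; in the project it comes from the identified drone motion model, and the ball is assumed to move with constant velocity over the horizon.

```python
def reference_ahead(ball_pos, ball_vel, time_to_target, t_max=5.0, dt=0.05):
    """Find the reference position [x(t+t0), y(t+t0)].

    Scans t0 upward and returns the first predicted ball position the
    drone can reach in time, i.e. the smallest t0 with TT(target) <= t0.
    Falls back to the current ball position if no intercept is found
    within t_max (e.g. the ball is too fast).
    """
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(target) <= t0:
            return target
        t0 += dt
    return ball_pos
```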
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning based on the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drones' states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to keep the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an area of interest for others who continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
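The outlier bookkeeping of the current implementation can be sketched as below. This is a minimal reading of the rule above (two consecutive measurements more than 0.5 m from the estimate re-initialise the strong filter); the actual filter additionally re-derives the velocity from the direction change, which is omitted here.

```python
def check_reset(estimate, measurement, outlier_streak, threshold=0.5):
    """Decide whether the strong filter should be re-initialised.

    Returns (new_outlier_streak, reset_position_or_None). A single
    outlier is treated as a possible false positive; the second
    consecutive outlier triggers a reset at the last measurement.
    """
    dist = ((estimate[0] - measurement[0]) ** 2
            + (estimate[1] - measurement[1]) ** 2) ** 0.5
    if dist <= threshold:
        return 0, None          # measurement agrees with the estimate
    if outlier_streak + 1 >= 2:
        return 0, measurement   # change of direction: reset here
    return outlier_streak + 1, None
```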
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the Turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in the case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one player to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
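The greedy matching described above can be sketched as follows; the real 'Match' function lives inside the particle filter, so this standalone version is ours.

```python
def match(last_positions, measurements):
    """Greedy nearest-neighbour matching of measurements to players.

    Each measurement claims its closest not-yet-claimed player, so when
    two measurements share a nearest neighbour, the second one falls
    back to its next-nearest player. Returns one player index per
    measurement. As noted above, this is not optimal in the general
    assignment sense.
    """
    taken = set()
    assignment = []
    for mx, my in measurements:
        ranked = sorted(range(len(last_positions)),
                        key=lambda i: (last_positions[i][0] - mx) ** 2
                                    + (last_positions[i][1] - my) ** 2)
        idx = next(i for i in ranked if i not in taken)
        taken.add(idx)
        assignment.append(idx)
    return assignment
```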
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control system for the drone is robust. As the flying height of the drone is not critical for the system, the height of the drone is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
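A one-dimensional constant-velocity predict/update cycle illustrates the filter structure; the project filter covers x, y and ψ, and the noise levels below are illustrative, not identified values.

```python
import numpy as np

def kf_step(x, P, z, dt, q=0.1, r=0.05):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    State x = [position, velocity]. When the top camera misses the LEDs,
    z is None and only the predict step runs, which is exactly how the
    missing 25% of camera frames are bridged.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # camera measures position only
    Q = q * np.eye(2)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:                       # update only when the drone is seen
        y = z - (H @ x)[0]                  # innovation
        S = (H @ P @ H.T)[0, 0] + r
        K = (P @ H.T / S).ravel()           # Kalman gain
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P
```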
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visualization of the original data measured from the top camera. Based on fig.2, the data clearly indicates what the motion of the drone is like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation provides a reasonable estimate for the empty data points. <br />
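A minimal Python sketch of this gap-filling step (the project does it in MATLAB): linear interpolation across the samples where the top camera returned no detection, marked here as NaN. The function and variable names are illustrative.

```python
import numpy as np

def fill_gaps(t, y):
    """Linearly interpolate the empty (NaN) samples in a top-camera
    position trace, as done before system identification."""
    y = np.asarray(y, dtype=float)
    valid = ~np.isnan(y)
    # interpolate the full time base from the valid samples only
    return np.interp(t, np.asarray(t, dtype=float)[valid], y[valid])
```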
====Coordinate system introduction ====
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: one fixed to the body frame and one fixed to the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global frame. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, which avoids a parameter-varying Kalman filter. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
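The frame transformation can be illustrated with a small Python sketch (the project implements this in Simulink). For the planar case with roll and pitch near zero, only the yaw angle psi enters the rotation:

```python
import numpy as np

def body_to_global(vx_b, vy_b, psi):
    """Rotate a body-frame velocity into the global frame using the
    drone yaw angle psi (planar case, roll and pitch assumed ~0)."""
    c, s = np.cos(psi), np.sin(psi)
    R = np.array([[c, -s], [s, c]])
    return R @ np.array([vx_b, vy_b])

def global_to_body(vx_g, vy_g, psi):
    """Inverse transform: R(psi)^T rotates global-frame data back into
    the body frame, where the Kalman filter runs."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, s], [-s, c]]) @ np.array([vx_g, vy_g])
```

Because the rotation matrix sits outside the filter, the filter itself stays linear and time-invariant in the body frame.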
====Model identification from input to position ====
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world, no system is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as <math>X = [\dot{x} \;\; x]^T</math>, i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
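The validation step can be mimicked in Python: simulate an identified discrete-time state-space model and score it with the NRMSE fit percentage that MATLAB's compare() reports. The matrices below are placeholders, not the identified drone model.

```python
import numpy as np

def simulate_ss(A, B, C, u, x0=None):
    """Simulate a discrete-time state-space model x[k+1] = A x + B u,
    y = C x, for a scalar input sequence u."""
    x = np.zeros(A.shape[0]) if x0 is None else np.asarray(x0, dtype=float)
    y = []
    for uk in u:
        y.append(C @ x)         # output before the state update
        x = A @ x + B * uk      # B is a vector, uk a scalar
    return np.array(y)

def fit_percent(y_meas, y_sim):
    """NRMSE fit in percent, the metric reported by MATLAB's compare()."""
    return 100.0 * (1.0 - np.linalg.norm(y_meas - y_sim)
                    / np.linalg.norm(y_meas - np.mean(y_meas)))
```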
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was measured and modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimation is nonetheless reasonable. <br><br><br />
<br />
In practice, no system is perfectly linear; the nonlinear behavior of the system may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector <math>X = [\dot{y} \;\; y]^T</math>, i.e. velocity and position.<br><br> <br />
The model is then:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller and driver electronics. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, it was decided to use the drone's own structure, control electronics and software for positioning the drone. Moreover, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. Therefore, the first idea was to disassemble the camera and mount it on a swivel so it could tilt down 90 degrees, which would require some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images should be accessible from MATLAB. However, after some trial and error, it was observed that capturing and transferring the images of the embedded drone camera to MATLAB is not straightforward. Further effort showed that using this drone camera is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly from MATLAB is not possible with its built-in software. Therefore, an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed that the horizontal view is close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were done with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera is called the Ai-Ball and is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
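The initialization steps above can be sketched in Python, based on the AT-command format described in the AR.Drone SDK: each command carries a strictly increasing sequence number and ends with a carriage return, and sending a few nonzero bytes to the navdata port starts the navdata stream. The exact handshake is in the SDK figure above; this only shows the general shape, with port numbers taken from the initialization list.

```python
import socket
import struct

DRONE_IP = "192.168.1.1"
AT_PORT = 5556       # control port from the initialization list
NAVDATA_PORT = 5554  # navdata port from the initialization list

_seq = 0

def at_command(name, *args):
    """Build one AT command string; the drone requires a strictly
    increasing sequence number and a trailing carriage return."""
    global _seq
    _seq += 1
    payload = ",".join(str(a) for a in (_seq,) + args)
    return "AT*{}={}\r".format(name, payload)

def init_drone(sock):
    """Wake up the navdata stream and set the horizontal reference
    (sketch; the full initiation handshake is in the SDK figure above)."""
    # a few nonzero bytes to the navdata port start the navdata stream
    sock.sendto(struct.pack("<i", 1), (DRONE_IP, NAVDATA_PORT))
    # FTRIM sets the flat-trim (horizontal plane) reference
    sock.sendto(at_command("FTRIM").encode(), (DRONE_IP, AT_PORT))
```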
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
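One detail such a wrapper has to handle is that the AR.Drone SDK transmits floating-point PCMD arguments as the signed 32-bit integer sharing the same bit pattern as the float. A Python sketch (the project's wrapper is in MATLAB; the function names are illustrative and the SDK argument order roll, pitch, gaz, yaw is assumed):

```python
import struct

def f2i(f):
    """Reinterpret a 32-bit float's bit pattern as a signed integer,
    the wire format the AR.Drone SDK uses for float arguments."""
    return struct.unpack("<i", struct.pack("<f", f))[0]

def pcmd(seq, roll, pitch, gaz, yaw):
    """Build a progressive move command from four values in [-1, 1]
    (left-right tilt, front-back tilt, vertical speed, yaw rate)."""
    flag = 1  # enable progressive commands
    vals = ",".join(str(f2i(v)) for v in (roll, pitch, gaz, yaw))
    return "AT*PCMD={},{},{}\r".format(seq, flag, vals)
```

For example, a float argument of 0.8 is sent as the integer 1061997773, which matches the encoding example in the SDK documentation.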
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After some searching, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, one needs a Wi-Fi antenna. The camera is mounted facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to determine the real-world size of the image frame and the corresponding real-world dimension per pixel, and is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
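The conversion from diagonal FOV to real-world dimensions can be sketched in Python: the 4:3 aspect ratio forms a 3-4-5 triangle with the diagonal, so the ground footprint and the metres-per-pixel value follow directly from the altitude. The parameter defaults are the Ai-Ball numbers above; the function name is illustrative.

```python
import math

def ground_footprint(altitude, diag_fov_deg=60.0, aspect=(4, 3), res=(640, 480)):
    """Ground area seen by a downward-facing camera at the given altitude,
    derived from the diagonal FOV; also returns metres per pixel."""
    diag = 2.0 * altitude * math.tan(math.radians(diag_fov_deg) / 2.0)
    hyp = math.hypot(*aspect)          # 5 for a 4:3 image
    width = diag * aspect[0] / hyp
    height = diag * aspect[1] / hyp
    return width, height, width / res[0]
```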
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project, the Turtle was to be used as a referee. The software developed at TechUnited did not need any further extension, since part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and they can take images. Given the game situation and the agent positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in a 2-D space. Consequently, the desired drone trajectories are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ), measured from the top camera images, are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Motion Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the motion equation of the drone, an I-action is not necessary for the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region have not been offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control commands first must be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
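The dead-zone PD law described above can be sketched in Python for a single axis (the gains and dead-zone width below are placeholders, not the tuned values):

```python
def deadzone_pd(error, d_error, kp, kd, dz):
    """High-level controller for one axis: zero output inside the dead
    zone (the drone's comfort zone), PD action outside it. The error is
    not offset by the dead-zone width, so small commands in the drone's
    oscillation region are never sent."""
    if abs(error) < dz:
        return 0.0
    return kp * error + kd * d_error
```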
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is important. In the field of automotive and/or aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application has been developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone, which were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in the MATLAB environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
Tolcer
https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45695
Implementation MSD16
2017-10-22T21:46:21Z<p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (B.O.O.P.)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as-is without alteration. Preferably, we would also use this software to process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
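The color-space conversion can be sketched in Python for a single pixel, using the ITU-R BT.601 studio-range coefficients that MATLAB's rgb2ycbcr is documented to use:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr (ITU-R BT.601, studio range:
    Y in [16, 235], Cb/Cr centred on 128)."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    y  = 16.0  +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128.0 -  37.797 * r -  74.203 * g + 112.000 * b
    cr = 128.0 + 112.000 * r -  93.786 * g -  18.214 * b
    return y, cb, cr
```

In this space, luminance (Y) and chrominance (Cb, Cr) separate cleanly, which is what makes the color thresholds for field, ball, and player detection simple.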
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm was updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest a value of 0. Next, as noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns the blobs with their properties, such as the blob center and the major and minor axis lengths. From this list, it is determined whether each blob could be a ball: blobs that are too big or too small are removed. For the remaining candidates, a confidence is calculated based on the blob size and roundness: <br />
<br />
 confidence = (minorAxis / majorAxis) * (min(Rblob, Rball) / max(Rblob, Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
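The confidence formula above translates directly to Python (names are illustrative):

```python
def ball_confidence(minor_axis, major_axis, blob_radius, ball_radius):
    """Confidence that a blob is the ball: its roundness (minor over
    major axis) times how close the blob radius is to the expected ball
    radius. Both factors are at most 1, so a perfectly round blob of the
    expected size scores 1.0."""
    roundness = minor_axis / major_axis
    size = min(blob_radius, ball_radius) / max(blob_radius, ball_radius)
    return roundness * size
```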
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of color filtering in the CbCr plane, it is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of accepted blob sizes is larger for the object detection than for the ball detection. This is because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range for blobs lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-the-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, the algorithm was updated and improved to handle the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is still known (at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well, so a further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take an image of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and they are compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
 if (((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
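The condition translates directly into a small Python predicate (names follow the pseudocode above): a single blob that is clearly elongated and at least two players wide is flagged as a possible collision.

```python
def possible_collision(minor_axis, major_axis, min_radius):
    """Image-based collision test on one merged blob: elongated
    (major/minor > 1.5), at least two player radii across its minor
    axis, and at least four across its major axis."""
    return bool((major_axis / minor_axis > 1.5)
                and (minor_axis >= 2 * min_radius)
                and (major_axis >= 4 * min_radius))
```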
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in various ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone attitude is well stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore, these two coordinates are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, and its output data is accessible. The obtained altitude data is fused with the planar position data, and the following position vector is obtained for the drone.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating the Objects: Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects, the obtained pixel coordinates must be transformed into the field reference coordinate frame. The coordinate system of the image in MATLAB is given below. Note that here the unit of the obtained position data is pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|500px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Next, the pixel coordinates of the center of the detected object or ball are calculated with respect to the center of the image. This data is processed further under the following assumptions:<br />
* The focal center of the camera is assumed to coincide with the center of the image.<br />
* The camera is always parallel to the ground plane, neglecting tilting of the drone about its roll (φ) and pitch (θ) axes.<br />
* The camera is aligned and fixed such that the narrow edge of the image is parallel to the y-axis of the drone, as shown in the figure below. <br />
* The distance from the center of gravity of the drone (its origin) to the camera lies along the x-axis of the drone and is known.<br />
* The alignment between camera, drone and the field is shown in the figure below.<br />
[[File:RelativeCoordinates.png|thumb|centre|800px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Taking into account the position of the centre<br />
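Under the assumptions above, the pixel-to-field conversion reduces to a scaling, a yaw rotation and a translation. A hedged Python sketch (the function name and the m_per_px parameter, i.e. the ground distance per pixel at the current altitude, are hypothetical illustrations; the project implements this in MATLAB/Simulink):<br />

```python
import math

def pixel_to_field(px, py, img_w, img_h, m_per_px,
                   drone_x, drone_y, drone_psi, cam_offset_x):
    """Map a detected pixel (px, py) to field coordinates.

    Assumes the camera looks straight down (zero roll/pitch), the focal
    center coincides with the image center, and m_per_px is the ground
    distance covered by one pixel at the current altitude.
    """
    # Pixel offset from the image center, in meters on the ground plane
    dx_cam = (px - img_w / 2) * m_per_px
    dy_cam = (py - img_h / 2) * m_per_px
    # Rotate from the drone body frame into the field frame by the yaw psi
    dx = math.cos(drone_psi) * dx_cam - math.sin(drone_psi) * dy_cam
    dy = math.sin(drone_psi) * dx_cam + math.cos(drone_psi) * dy_cam
    # Add the drone position and the known camera offset along the body x-axis
    x = drone_x + cam_offset_x * math.cos(drone_psi) + dx
    y = drone_y + cam_offset_x * math.sin(drone_psi) + dy
    return x, y
```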
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of objects such as the ball, whether or not it has been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object, to meet it at the intersection of the velocity vectors. Using the current ball position as reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The question that arises is which look-ahead time t0 should be used for the desired reference. To solve this, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, the time to target (TT) of the drone is calculated for each time step ahead of the ball (see Fig.3). The target position is simply calculated from the time ahead. The reference position is then the position that satisfies t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when drone and object are close to each other. Furthermore, the same strategy can be applied to the ground agents, which move only in one direction: for the ground robot, the reference value should be determined only in the moving direction of the turtle, so only the x-component (turtle moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
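The search of Fig.3 can be sketched as a simple sampled sweep over candidate look-ahead times, returning the first predicted ball position the drone can reach in time. This is an illustrative Python sketch; the time_to_target interface (the drone-plus-controller model mentioned above) and the constant-velocity ball prediction are assumptions for the example:<br />

```python
def find_lookahead(ball_pos, ball_vel, time_to_target, t_max=5.0, dt=0.05):
    """Search for the look-ahead time t0 such that the drone's
    time-to-target TT equals t0 (sampled search, as in Fig.3).

    time_to_target(x, y) models the drone+controller: the time needed
    to reach (x, y) from the drone's current state (assumed interface).
    """
    t0 = 0.0
    while t0 <= t_max:
        # Predicted ball position t0 seconds ahead (constant velocity)
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(*target) <= t0:
            return target  # reference position [x(t+t0), y(t+t0)]
        t0 += dt
    # Fallback: track the current ball position
    return ball_pos
```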
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, path planning should create paths that avoid collisions between them. This is handled in the collision avoidance block, which has higher priority than the optimal path planning based on the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision avoidance mode to keep the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone. It is sent to the LLC and stopped once the drones are in safe positions. Since this project deals with only one drone, collision avoidance was not implemented; it could be an interesting extension for those continuing the project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
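The storage idea above can be illustrated with a minimal Python stand-in for the MATLAB class 'W': state is only changed through explicit set functions, and players are held separately because their number varies. Property and method names here are illustrative, not the project's exact API:<br />

```python
class WorldModel:
    """Minimal sketch of the storage role of the World Model: last known
    object positions, changed only through explicit 'set' functions."""

    def __init__(self, n_players):
        self.ball = None          # last known ball position (x, y)
        self.drone = None         # drone pose (x, y, psi, z)
        self.turtle = None        # turtle pose
        self.players = [None] * (2 * n_players)  # both teams

    # Only these setters may change the stored state, so other
    # processes cannot accidentally overwrite World Model data.
    def set_ball(self, pos):
        self.ball = pos

    def set_player(self, index, pos):
        self.players[index] = pos

W = WorldModel(n_players=2)
W.set_ball((1.5, -0.3))
print(W.ball)  # (1.5, -0.3)
```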
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
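The two-hypothesis scheme can be sketched as follows. This is an illustrative stand-in, not the project's filter: the raw measurement plays the role of the 'weak' hypothesis, a simple exponential smoothing stands in for the 'strong' α-parameterized update described above, and the 0.5 m / two-consecutive-outliers re-initialization follows the text:<br />

```python
import math

class BallFilter:
    """Sketch of the two-hypothesis scheme: a 'strong' smoothed estimate
    plus the raw measurement as the 'weak' hypothesis. Two consecutive
    measurements further than 0.5 m from the strong estimate
    re-initialize it (change of direction detected)."""

    def __init__(self, alpha=0.2, threshold=0.5):
        self.alpha = alpha          # stands in for the alpha parameters
        self.threshold = threshold  # typical-measurement-noise distance
        self.estimate = None
        self.outliers = 0

    def update(self, z):
        if self.estimate is None:
            self.estimate = z
            return self.estimate
        dist = math.hypot(z[0] - self.estimate[0], z[1] - self.estimate[1])
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:
                # Change of direction: restart from the measurement
                self.estimate = z
                self.outliers = 0
            return self.estimate
        self.outliers = 0
        # 'Strong' filter: move only a fraction alpha toward the measurement
        self.estimate = tuple(e + self.alpha * (m - e)
                              for e, m in zip(self.estimate, z))
        return self.estimate
```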
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
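The 'Match' step described above can be sketched as a greedy nearest-neighbor assignment with a fallback to the next-nearest player when two measurements claim the same one. This is an illustrative Python version of the idea, not the project's MATLAB function:<br />

```python
def match(measurements, last_positions):
    """Assign each measured position to the nearest last-known player,
    falling back to the next-nearest player when an earlier measurement
    has already claimed it."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    assignment = {}   # measurement index -> player index
    taken = set()
    for i, m in enumerate(measurements):
        # Players sorted by distance to this measurement
        order = sorted(range(len(last_positions)),
                       key=lambda j: dist2(m, last_positions[j]))
        # Take the nearest player not already claimed
        for j in order:
            if j not in taken:
                assignment[i] = j
                taken.add(j)
                break
    return assignment
```

As the text notes, this greedy scheme is not optimal: with many players entering and leaving the field of view, a second measurement may be pushed to a clearly wrong player.<br />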
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for pitch, roll, yaw and the vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. From the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and reduce the measurement noise, so that the closed-loop control of the drone is robust. Since the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visualization of the original data measured from the top camera. Based on fig.2, the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces reasonable estimates for the empty data points. <br />
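The gap-filling step can be illustrated with a small pure-Python linear interpolation over missing samples (the project does this on the top-camera data in MATLAB; this sketch uses None to mark empty measurements):<br />

```python
def fill_gaps(samples):
    """Linearly interpolate missing (None) top-camera samples,
    as in the data preprocessing step above."""
    filled = list(samples)
    known = [i for i, v in enumerate(filled) if v is not None]
    for i, v in enumerate(filled):
        if v is None:
            # Nearest known neighbours on both sides
            left = max((k for k in known if k < i), default=None)
            right = min((k for k in known if k > i), default=None)
            if left is None or right is None:
                continue  # cannot interpolate at the edges
            w = (i - left) / (right - left)
            filled[i] = filled[left] + w * (filled[right] - filled[left])
    return filled

print(fill_gaps([0.0, None, 2.0]))  # [0.0, 1.0, 2.0]
```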
==== Coordinate system introduction ====<br />
As the drone is a flying object with four controlled degrees of freedom in the field, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global frame. <br><br><br />
The data can be transformed between body frame and global frame via the rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model identified is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid making the Kalman filter parameter-varying. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
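The rotation between the two frames is a standard planar rotation by the yaw angle ψ; a small Python sketch of the two transforms used around the Kalman filter (function names are illustrative):<br />

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame
    using the yaw angle psi (planar rotation matrix)."""
    vx = math.cos(psi) * vx_body - math.sin(psi) * vy_body
    vy = math.sin(psi) * vx_body + math.cos(psi) * vy_body
    return vx, vy

def global_to_body(vx, vy, psi):
    """Inverse rotation: global frame back into the body frame."""
    return body_to_global(vx, vy, -psi)
```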
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and will be used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable estimate of the model. Based on the response of the output, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with real response is evaluated in Matlab. The result represents the extent about how the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimation is reasonable. <br><br><br />
<br />
In the real world, nothing is perfectly linear; the nonlinear behavior of the system may cause part of the mismatch of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector X = [ẏ, y], i.e. velocity and position.<br><br> <br />
The model is then:<br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: thanks to its free software (for both Android and iOS) it can be controlled via a mobile phone, to which it sends high-quality HD streaming video. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Besides, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; for refereeing, however, it should look downwards. The first idea was therefore to disassemble it and connect the camera to a swivel to tilt it down 90 degrees, which would require some changes to the structure. Since all implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera is neither easy nor straightforward in MATLAB. Further effort showed that using this drone camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolution, processing is done at this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure. The captured images have a ratio of 16:9. Using this fact, measurements showed that the diagonal FOV is close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2. The corresponding distance per pixel is calculated at standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, it is not used for the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
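The initialization above can be sketched in Python with the standard socket module. This is an illustrative sketch, not the project's MATLAB code: the AT command format follows the AR.Drone SDK (AT*NAME=<seq>,<args> terminated by a carriage return), while the navdata handshake shown in the figure is omitted:<br />

```python
import socket

AT_HOST, AT_PORT = "192.168.1.1", 5556   # control port (see above)
NAV_PORT = 5554                           # navdata port

def at_command(name, seq, *args):
    """Build an AT command string as defined by the AR.Drone SDK,
    e.g. AT*FTRIM=<seq>\\r (sequence numbers must increase)."""
    payload = ",".join(str(a) for a in (seq,) + args)
    return "AT*{}={}\r".format(name, payload)

def init_drone():
    """Open the UDP control object and set the horizontal-plane
    reference (FTRIM). Sketch only: the navdata stream initiation
    of the figure above is omitted."""
    ctrl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ctrl.sendto(at_command("FTRIM", 1).encode(), (AT_HOST, AT_PORT))
    return ctrl

print(repr(at_command("FTRIM", 1)))  # 'AT*FTRIM=1\r'
```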
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
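The input side of such a wrapper can be sketched as a simple saturation of the four command values to [-1, 1] before they are formatted into a UDP string. This illustrative sketch only covers the clamping; the byte parsing of the 500-byte navdata output is omitted:<br />

```python
def wrap_command(x_tilt, y_tilt, z_speed, psi_speed):
    """Clamp the four drone command values to [-1, 1], as the wrapper
    around the drone block guarantees (illustrative Python version
    of the MATLAB wrapper's input side)."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return [clamp(x_tilt), clamp(y_tilt), clamp(z_speed), clamp(psi_speed)]

print(wrap_command(0.3, -2.0, 0.0, 1.5))  # [0.3, -1.0, 0.0, 1.0]
```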
<br />
== Top-Camera ==<br />
The topcam is a camera fixed above the playing field, used to estimate the location and orientation of the drone. This estimate is used as feedback for the drone to position itself at a desired location.<br />
The topcam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After some searching, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a WiFi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel, and is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
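The real-world footprint of the image frame follows directly from the diagonal FOV and the aspect ratio; a hedged sketch of the geometry (the function name is illustrative, and a flat, downward-facing camera is assumed, as in the positioning section):<br />

```python
import math

def ground_footprint(altitude, diag_fov_deg=60.0, aspect=(4, 3)):
    """Real-world size of the image frame on the ground for a
    downward-facing camera, from the diagonal FOV and aspect ratio
    (Ai-Ball: 60 degrees diagonal, 4:3)."""
    # Ground-plane diagonal covered by the camera at this altitude
    diag = 2 * altitude * math.tan(math.radians(diag_fov_deg) / 2)
    a, b = aspect
    scale = diag / math.hypot(a, b)   # split the diagonal in the 4:3 ratio
    return a * scale, b * scale       # (width, height) in meters

w, h = ground_footprint(altitude=2.0)
# Meters per pixel along the width would then be w / 640.
```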
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further expansion, since part of the extensive existing code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the Drone referee project. The agents in this projects have two main capabilities which allow them to move and take images. With regards to situation of a game and the agents position, the desired position of each agents are calculated based on the subtask that is assigned to them. Generating the reference point is within the responsibilities of path-planning block which is not covered here. The goal of the Motion control block for the drone is to track effectively the desired drone states (xd,yd,θd) which represent the drone position and yaw angle in global coordinate system. These values as an outputs of path planning block are being set as a reference value for motion control block. As it is shown in Fig.1, the drone states obtained from a top camera which is installed on the ceiling to use as feedback in control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||Fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig.2, the drone states (x,y,θ) measured from the top camera images are compared to the reference values. The high level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||Fig.2 Drone Motion Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of three regions. In the dead zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position dependent force in the equation of motion of the drone, an I action is not necessary for the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead zone region are not offset by the dead zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system using a rotation matrix based on the Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
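The dead-zone PD law described above can be sketched as follows (a minimal Python illustration; the actual controller was implemented in Simulink, and the gains and dead-zone width here are placeholders):

```python
def dead_zone_pd(error, d_error, kp, kd, dead_zone):
    """Dead-zone PD control law: zero output inside the comfort zone,
    plain PD (without offsetting by the dead-zone width) outside it."""
    if abs(error) < dead_zone:
        return 0.0
    return kp * error + kd * d_error
```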
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is as important as the angles themselves. In the fields of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
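With roll and pitch assumed to be approximately zero, the yaw-only transformation of a command from the global (field) frame to the drone body frame can be sketched as (an illustrative Python version of the rotation used in the Simulink model):

```python
import math

def global_to_drone(vx_g, vy_g, yaw):
    """Rotate a velocity command from the global frame into the drone
    body frame using only the yaw angle, as described above."""
    c, s = math.cos(yaw), math.sin(yaw)
    vx_b = c * vx_g + s * vy_g
    vy_b = -s * vx_g + c * vy_g
    return vx_b, vy_b
```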
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, it must be possible for the robots to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script running on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
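The UDP control path described above can be sketched as follows (a minimal Python illustration; the IP address, port and command format are placeholders, not the project's actual values, which are defined in the GitHub repository):

```python
import socket

def send_command(cmd, robot_ip="192.168.1.10", port=5005):
    """Send a command string over UDP to the Python script on the
    robot's Raspberry Pi, mirroring the Wi-Fi control path above."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(cmd.encode("ascii"), (robot_ip, port))
    sock.close()
```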
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only a part was handpicked, as it suited the needs of the project best. This data, as stated earlier, is information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code-base of TechUnited. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted in the figure above. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div><br />
<br />
Implementation MSD16 (2017-10-22T21:40:00Z, Tolcer: /* Locating of the Objects : Ball & Player */)<br />
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we do not alter it and use it as is. Preferably we would use this software to process the images from the drone as well. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill function.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls used can be red, orange or yellow: colors that are in the upper-left corner of the CbCr plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. For each blob in this list, it is determined whether it could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
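The confidence formula above can be expressed compactly as follows (a minimal Python illustration; the blob properties would come from the blob-recognition step, and the expected ball radius is a placeholder parameter):

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a blob is the ball, combining roundness
    (minor/major axis ratio) and size (blob radius vs expected
    ball radius), as in the formula above."""
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```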
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of accepted blob sizes is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. As in the line detection case, essentially the algorithm developed by the previous generation is used. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update and improvement were added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, the position of the ball with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball out of pitch refereeing skill function. However, it sometimes yields false positive and false negative results as well, so a further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
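The condition above can be sketched as a small predicate (an illustrative Python version; the original check was implemented in MATLAB):

```python
def possible_collision(major_axis, minor_axis, min_radius):
    """Flag a blob as a possible collision between two players:
    clearly elongated (major/minor > 1.5) and large enough to
    contain two player bodies, per the condition above."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)
```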
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (here, only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under an essential assumption: the drone attitude is stabilized well enough that the roll (φ) and pitch (θ) values are approximately zero. Therefore these two angles have not been taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, whose output data is accessible. The obtained altitude data is fused with the planar position data to form the drone position vector.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates should be transformed into the reference frame. The coordinate system of the image in MATLAB is given below. Note that, here, the obtained data is in pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|500px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of the detected object or ball are calculated relative to the center of the image. This data is processed under the following assumptions:<br />
* The center of the image is assumed to coincide with the focal center of the camera.<br />
* The camera is always parallel to the ground plane, neglecting the tilting of the drone about the roll and pitch axes.<br />
* The camera is aligned parallel to the drone body frame.<br />
[[File:RelativeCoordinates.png|thumb|centre|800px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
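Under these assumptions, a detected pixel can be mapped to field coordinates by combining the per-pixel size with the drone pose. A minimal sketch, assuming a straight-down camera and a known meters-per-pixel value for the current altitude (all names and values are illustrative):

```python
import math

def pixel_to_field(px, py, img_w, img_h, m_per_px, drone_x, drone_y, yaw):
    """Map a detected pixel (px, py) to field coordinates: offset from
    the image center, scale to meters, rotate by the drone yaw, and
    translate by the drone position."""
    dx = (px - img_w / 2) * m_per_px
    dy = (py - img_h / 2) * m_per_px
    c, s = math.cos(yaw), math.sin(yaw)
    return (drone_x + c * dx - s * dy,
            drone_y + s * dx + c * dy)
```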
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by the agents. For instance, this block sends 'detect ball' as a task to agent A (drone) and 'locate player' to agent B. Then, the path planning block requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether it has been updated by an agent camera or not. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the ball dynamics. Therefore, it is assumed that estimated information about an object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two aspects addressed in the path planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as reference input would result in a curved trajectory (red line). However, if the estimated ball position some time ahead is sent as reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 to use for the desired reference. To solve it, we require a model of the drone motion with its controller, which can calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the Turtle. Hence, only the X component (Turtle moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
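The t0 = TT search above can be sketched with a simple time-stepping loop (an illustrative Python version; `time_to_target` is a placeholder for the drone-plus-controller model mentioned in the text, and the step size and horizon are assumptions):

```python
def find_time_ahead(ball_pos, ball_vel, time_to_target, t_max=5.0, dt=0.1):
    """Step the look-ahead time t0 forward until the drone's predicted
    time to reach the ball's future position (TT) no longer exceeds t0,
    i.e. the first t0 satisfying TT <= t0."""
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(target) <= t0:
            return t0, target
        t0 += dt
    return t_max, target  # fall back to the farthest prediction
```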
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning calculated based on the objectives of the drones (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to the drones in a direction that maintains a safe distance. The command, a velocity, must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that results in collision avoidance and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be a possible area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
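Though not implemented in this project, the repel maneuver described above could be sketched as follows (an illustrative Python version; the command magnitude and the choice of perpendicular direction are assumptions for this sketch):

```python
import math

def avoidance_command(p1, v1, p2, speed=1.0):
    """Compute a velocity command for drone 1 that is perpendicular to
    its own velocity vector and directed away from drone 2, as in the
    collision avoidance scheme above."""
    # Unit vector perpendicular to v1 (guard against zero velocity)
    n = math.hypot(v1[0], v1[1]) or 1.0
    perp = (-v1[1] / n, v1[0] / n)
    # Flip it if it points toward the other drone
    away = (p1[0] - p2[0], p1[1] - p2[1])
    if perp[0] * away[0] + perp[1] * away[1] < 0:
        perp = (-perp[0], -perp[1])
    return (speed * perp[0], speed * perp[1])
```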
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
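The storage pattern described above (globally readable, writable only through set functions) can be sketched in Python; the class and method names follow the tables, but the details are illustrative since the original class was written in MATLAB, and storing two teams of n players each is an assumption of this sketch:

```python
class WorldModel:
    """Stores last known positions; values change only via set_* methods."""
    def __init__(self, n_players):
        self.ball = None
        self.drone = None
        self.turtle = None
        # Two teams of n_players each (assumption for this sketch)
        self.players = [None] * (2 * n_players)

    def set_ball(self, x, y):
        self.ball = (x, y)

    def set_drone(self, x, y, psi, z):
        self.drone = (x, y, psi, z)

    def set_player(self, i, x, y):
        self.players[i] = (x, y)
```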
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one uses a 'strong' filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one uses a 'weak' filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track of whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given in the table below. Increasing α_v makes the filter 'stronger', increasing α_x makes it 'weaker' (i.e. it trusts the measurements more) and increasing α_z makes the filter 'stronger' with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
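The ingredients of the update (old velocity, new and old measurements, old position, dt) can be combined in a complementary-filter style sketch. Note that this is NOT the project's exact formula, which is given in the equation image above; the blending with a_v and a_x is an assumption for illustration:

```python
def particle_update(x_old, v_old, z_new, z_old, dt, a_v=0.8, a_x=0.2):
    """Hedged sketch of a strong/weak filter step: blend the old
    particle velocity with the measured velocity (a_v), then blend
    the predicted position with the new measurement (a_x)."""
    v_meas = tuple((zn - zo) / dt for zn, zo in zip(z_new, z_old))
    v_new = tuple(a_v * vo + (1 - a_v) * vm for vo, vm in zip(v_old, v_meas))
    x_pred = tuple(xo + vn * dt for xo, vn in zip(x_old, v_new))
    x_new = tuple((1 - a_x) * xp + a_x * zn for xp, zn in zip(x_pred, z_new))
    return x_new, v_new
```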
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the 'Match' function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
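A minimal sketch of such a matching step (in Python; the function and variable names are illustrative, not the project's code): each measurement is greedily assigned to its nearest known player, falling back to the second-nearest neighbor when two measurements claim the same player.

```python
def match(measurements, known_positions):
    """Return a list `assign` with assign[i] = index of the player that
    measurement i is matched to (or None if all players are taken)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    assign, taken = [], set()
    for m in measurements:
        # Players sorted by distance: nearest first, then second-nearest, ...
        order = sorted(range(len(known_positions)),
                       key=lambda j: dist2(m, known_positions[j]))
        choice = next((j for j in order if j not in taken), None)
        if choice is not None:
            taken.add(choice)
        assign.append(choice)
    return assign

# Two players: the second measurement's nearest neighbor (player 0) is
# already taken, so it falls back to player 1.
players = [(0.0, 0.0), (1.0, 0.0)]
result = match([(0.1, 0.0), (0.4, 0.0)], players)
```

As noted above, this greedy fallback is not globally optimal, which only starts to matter with many players entering and leaving the field of view.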
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw rate and vertical speed. The corresponding forward and sideways velocities in the body frame are measured by sensors inside the drone. In addition, three LEDs on the drone can be detected by the camera above the field; from the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the top camera cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion through these gaps and to reduce the measurement noise, so that the closed-loop drone control system remains robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
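For illustration, a minimal Kalman filter with the state [velocity, position] can be sketched as follows (in Python rather than the project's MATLAB; the system matrices form a generic discrete integrator model and all noise values are assumed, not the identified drone model). The key point is that the filter keeps predicting through frames in which the camera loses the LEDs.

```python
import numpy as np

dt = 1 / 30                                   # top-camera frame period (assumed)
A = np.array([[1.0, 0.0],                     # state [velocity, position]:
              [dt,  1.0]])                    # position integrates velocity
H = np.array([[0.0, 1.0]])                    # the camera measures position only
Q = np.diag([1e-2, 1e-4])                     # process noise (assumed)
R = np.array([[1e-2]])                        # measurement noise (assumed)

def kf_step(x, P, z=None):
    """One predict/update cycle; pass z=None for frames in which the camera
    lost the LEDs, so the filter predicts through the gap."""
    x, P = A @ x, A @ P @ A.T + Q             # predict
    if z is not None:                         # update only when measured
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x, P

# Drone moving at a constant 0.5 m/s, with every 4th camera frame missing.
x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 60):
    z = 0.5 * k * dt if k % 4 != 0 else None
    x, P = kf_step(x, P, z)
```

With the identified drone models below, A would additionally carry the input response of the commands (a, b, c, d).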
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are applied as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the drone position information is incomplete. Figure 2 gives a visual impression of the original data measured by the top camera; it indicates clearly what the drone motion looks like in one degree of freedom. To make the signal continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation provides a reasonable estimate for the empty data points. <br />
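The gap-filling step can be sketched as follows (Python; `numpy.interp` is a stand-in for whatever interpolation routine was actually used, and the data here is synthetic):

```python
import numpy as np

t = np.arange(10, dtype=float)               # frame time stamps
x = 0.2 * t                                  # true drone position in one axis
measured = x.copy()
missing = np.array([2, 3, 7])                # frames where the LEDs were lost
measured[missing] = np.nan                   # ~25% of samples are empty

# Linearly interpolate over the valid samples to get a continuous signal.
valid = ~np.isnan(measured)
filled = np.interp(t, t[valid], measured[valid])
```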
====Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are calculated in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified for the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
====Model identification from input to position ====<br />
Theoretically, the inputs and the corresponding velocity outputs are decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was measured and modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatching part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification is measured with a full battery, a fixed orientation, and the drone starting from a steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with the state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for the positioning of the drone. Besides, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look down. Therefore the first idea was to disassemble it and mount the camera on a swivel so it could tilt down 90 degrees, which would require some structural changes. Since the whole implementation is built in the MATLAB/Simulink environment, the camera images must be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera to MATLAB is not straightforward: it is either incompatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly from MATLAB is not possible with the built-in software. Therefore an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used for processing to keep the required processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, defined as shown in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a horizontal FOV of about 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2; the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
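How a diagonal FOV and the aspect ratio translate into per-axis FOV and ground distance per pixel can be sketched as follows (Python; a simple pinhole model looking straight down is assumed, and the 1.5 m camera height is an arbitrary example, not a project value). The 92° diagonal and 640x360 resolution come from the text.

```python
import math

def fov_per_axis(diag_fov_deg, w_px, h_px):
    """Split a diagonal FOV over the horizontal and vertical image axes
    using a pinhole model."""
    diag_px = math.hypot(w_px, h_px)
    half_tan = math.tan(math.radians(diag_fov_deg) / 2)
    fov_h = 2 * math.degrees(math.atan(half_tan * w_px / diag_px))
    fov_v = 2 * math.degrees(math.atan(half_tan * h_px / diag_px))
    return fov_h, fov_v

def metres_per_pixel(fov_deg, n_px, height_m):
    """Ground footprint of one image axis divided by its pixel count,
    for a camera looking straight down from `height_m`."""
    footprint = 2 * height_m * math.tan(math.radians(fov_deg) / 2)
    return footprint / n_px

fov_h, fov_v = fov_per_axis(92.0, 640, 360)
mpp = metres_per_pixel(fov_h, 640, 1.5)
```

Note that the specified 92° diagonal would predict a horizontal FOV of roughly 84°, which is why the measured 70° is remarkable.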
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
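These steps can be sketched as follows (Python; based on the AT-command conventions of the AR.Drone SDK referenced above, but the wake-up payload, sequence numbering and configuration key shown here are assumptions to be checked against the SDK, and the sketch is not tested against a real drone):

```python
import socket

DRONE_IP = "192.168.1.1"                      # remote host from the list above
AT_PORT, NAVDATA_PORT = 5556, 5554            # control / Navdata local ports

def at_command(name, seq, *args):
    """Format an AT command string following the SDK convention,
    e.g. 'AT*FTRIM=1\r'."""
    payload = ",".join([str(seq)] + [str(a) for a in args])
    return "AT*{}={}\r".format(name, payload)

def initiate(sock, seq=1):
    """Wake the Navdata stream, request the reduced 'demo' data set and set
    the horizontal-plane reference (FTRIM), as described above."""
    sock.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAVDATA_PORT))  # wake stream
    cfg = at_command("CONFIG", seq, '"general:navdata_demo"', '"TRUE"')
    sock.sendto(cfg.encode(), (DRONE_IP, AT_PORT))
    sock.sendto(at_command("FTRIM", seq + 1).encode(), (DRONE_IP, AT_PORT))
```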
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be regarded as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively, the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
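The input side of such a wrapper can be sketched as follows (Python; the SDK transmits each float argument of the progressive command AT*PCMD as the signed 32-bit integer with the same IEEE-754 bit pattern, and the function names here are illustrative, not the project's wrapper):

```python
import struct

def float_as_int(f):
    """Reinterpret a float32 bit pattern as a signed 32-bit integer, as the
    AT-command protocol expects."""
    return struct.unpack("<i", struct.pack("<f", f))[0]

def pcmd(seq, tilt_x, tilt_y, v_z, yaw_rate, flag=1):
    """Build the progressive-command string from the four doubles in
    [-1, 1]; flag=1 enables the tilt arguments."""
    vals = [float_as_int(v) for v in (tilt_x, tilt_y, v_z, yaw_rate)]
    return "AT*PCMD={},{},{},{},{},{}\r".format(seq, flag, *vals)
```

The output side would parse the 500-byte Navdata packet into the battery, attitude, velocity and altitude values listed above.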
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After some searching, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV), defined as shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480-pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to know the real-world size of the image frame and the corresponding real-world dimension per pixel, and it is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: they can move and take images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x,y,θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. With identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative with PD coefficients (Fig.4). Since there is no position-dependent force in the motion equation of the drone, an I action is not necessary. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
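The dead-zone PD law for a single axis can be sketched as follows (Python; the gains and the dead-zone width are illustrative, not the tuned project values):

```python
def hlc_axis(error, d_error, kp=0.8, kd=0.2, dead_zone=0.05):
    """PD with a dead zone. Note that the error is NOT offset by the
    dead-zone width, to avoid sending small commands in the drone's
    oscillatory region (as described above)."""
    if abs(error) < dead_zone:
        return 0.0                            # inside the comfort zone
    u = kp * error + kd * d_error
    return max(-1.0, min(1.0, u))             # LLC inputs live in [-1, 1]

print(hlc_axis(0.02, 0.0))                    # inside the dead zone -> 0.0
print(hlc_axis(0.5, -0.1))                    # outside -> PD output, 0.38
```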
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles; within this method, the order of rotation around the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
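Under these assumptions the transformation and its inverse can be sketched as follows (Python; the sign convention depends on how the frames are defined, so take the rotation direction here as illustrative):

```python
import math

def global_to_body(vx_g, vy_g, psi):
    """Rotate a global-frame command into the drone body frame by the
    yaw angle psi (pitch and roll assumed small)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g, -s * vx_g + c * vy_g)

def body_to_global(vx_b, vy_b, psi):
    """Inverse transformation, used to map filtered body-frame data back
    to the global frame."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_b - s * vy_b, s * vx_b + c * vy_b)
```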
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
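Sending a command string to the robot can be sketched as follows (Python; the actual string format and port are defined in the GitHub repository, so the "vx,vy,omega" format and the port number used here are assumptions for illustration only):

```python
import socket

def send_velocity(sock, robot_ip, vx, vy, omega, port=5005):
    """Send one planar velocity setpoint to the Raspberry Pi script as a
    comma-separated string over UDP (format and port are assumed)."""
    msg = "{:.3f},{:.3f},{:.3f}".format(vx, vy, omega)
    sock.sendto(msg.encode(), (robot_ip, port))
    return msg

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
msg = send_velocity(sock, "127.0.0.1", 0.5, 0.0, 0.1)
```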
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms useful information from the game can be extracted and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are given with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored in the memory of the Turtle and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone, which were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP communication protocol; this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
Tolcer
https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45693
Implementation MSD16 (2017-10-22T21:38:02Z)
<p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as-is without alterations. Preferably we would use this software to also process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and reused. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is separating the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color: the balls used can be red, orange or yellow, colors that lie in the upper-left corner of the CbCr plane. A binary image is created where pixels falling into this corner get a value of 1 and the rest a value of 0. Next, to filter some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether a blob could be a ball: blobs that are too big or too small are removed from the list. For each remaining candidate ball, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
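The confidence formula above can be sketched as follows (Python; the blob properties would come from the blob-recognition step, and how the blob radius Rblob is derived from the axes is an assumption here):

```python
def ball_confidence(minor_axis, major_axis, r_ball):
    """Roundness (minor/major axis) times agreement between the blob radius
    and the expected ball radius, per the formula above."""
    r_blob = (minor_axis + major_axis) / 4.0   # mean blob radius (assumption)
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match

print(ball_confidence(20, 20, 10.0))   # perfectly round, right size -> 1.0
```

Both factors are at most 1, so a perfectly round blob of the expected size scores 1 and anything elongated or mis-sized scores lower.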
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of doing the color filtering in the CbCr plane, it is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of detected blobs that could be players is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different than one seen from an angle. A bigger acceptance range for blobs lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused; a detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, the algorithm was extended to also handle the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is still known (or at least predicted), and based on this coordinate information the in/out decision can be improved. This extension was added to the ball out of pitch refereeing skill function. However, it still sometimes yields false positive and false negative results, so further improvement of the refereeing remains necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take an image of the playing field and see that there is no space between two blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection uses the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
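The condition above can be sketched as a small predicate (illustrative Python; the elongated-blob heuristic assumes that two touching players merge into one blob that is longer than it is wide, and larger than a single player in both directions):<br />

```python
def possible_collision(minor_axis, major_axis, minimal_object_radius):
    """A merged blob is elongated (axis ratio > 1.5) and larger than a
    single player along both axes, per the condition above."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * minimal_object_radius
            and major_axis >= 4 * minimal_object_radius)
```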
<br />
== Positioning skills ==<br />
The position of each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (here, only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating the Agents: Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles are important for the control of the drone itself, they are not needed for the refereeing tasks, because all refereeing and image processing algorithms are developed under one essential assumption: the drone attitude is well stabilized, such that the roll (φ) and pitch (θ) values are zero. These two coordinates are therefore not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. its x, y and yaw (ψ) coordinates. However, to be able to handle the refereeing tasks and image processing, the drone altitude must also be known. The drone has its own altimeter, whose output data is accessible; this altitude data is fused with the planar position data.<br />
<br />
The agent position vector is then obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating the Objects: Ball & Player ===<br />
As a result of the ball and object detection skills, the coordinates of the detected objects are obtained in pixels. To define the location of the detected objects, these pixel coordinates must be transformed into the reference frame. The coordinate system of the image in MATLAB is given below; note that the data here is in pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|500px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Subsequently, the pixel coordinates of the center of each detected object or ball are calculated with respect to the center of the image. This data is processed under the following assumptions:<br />
* The center of the image coincides with the focal center of the camera, neglecting any tilt of the camera <br />
* The camera is assumed to be aligned parallel to the field plane<br />
[[File:RelativeCoordinates.png|thumb|centre|800px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
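Under these assumptions, the pixel-to-relative-coordinate mapping can be sketched as follows (illustrative Python rather than the project's MATLAB code; the default FOV and resolution are the Ai-Ball values from the Hardware section, and a uniform metres-per-pixel scale is assumed):<br />

```python
import math

def pixel_to_relative(u, v, altitude, diag_fov_deg=60.0, width=640, height=480):
    """Map a pixel (u, v) (origin at the top-left, as in the MATLAB image
    frame) to metres relative to the point directly below the camera,
    assuming the camera looks straight down and the image centre coincides
    with the focal centre."""
    # Half of the ground diagonal covered by the image at this altitude.
    half_diag_m = altitude * math.tan(math.radians(diag_fov_deg) / 2)
    half_diag_px = math.hypot(width, height) / 2
    m_per_px = half_diag_m / half_diag_px      # assumed uniform scale
    dx = (u - width / 2) * m_per_px            # offset from the image centre
    dy = (v - height / 2) * m_per_px
    return dx, dy
```

The relative offsets would then be rotated by the drone yaw and added to the drone position to obtain field coordinates.<br />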
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent; for instance, it sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path planning block then requests from the world model the latest information about the position and velocity of the target object, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of an object like the ball whether or not it has been updated by an agent camera; in the latter case, the particle filter gives an estimate of the ball position and velocity based on the ball dynamics. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path planning block: first, the case of multiple drones, where collisions between them must be avoided; second, generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field, after which the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other; however, the velocity vector of the object can also be taken into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object, so as to meet it at the intersection of the velocity vectors. Using the current ball position as the reference input would result in a curved trajectory (red line), whereas sending the estimated ball position some time ahead as the reference results in a less curved, shorter trajectory (blue line). This approach yields better tracking performance, but requires more computational effort. The problem that arises is choosing the optimal time ahead t0 to set as the desired reference. To solve this, we need a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the drone's initial condition. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3); the target position is simply the predicted ball position at that time ahead. The reference position is the position that satisfies t0 = TT, so the reference becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. The same strategy can be applied to the ground agents, which move in only one direction: for the ground robot, the reference should be determined only along the moving direction of the turtle, so only the X component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
=== Collision avoidance ===<br />
When multiple drones fly above the field, the path planning should create paths that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning based on the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is done by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone. This command is sent to the LLC and stopped once the drones are back at safe positions. Since this project deals with only one drone, collision avoidance was not implemented; it could, however, be an interesting area for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
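Since collision avoidance was not implemented in this project, the following is only a sketch of the repel command described above (hypothetical names, distance threshold and gain): each drone receives a velocity perpendicular to its own velocity vector, signed so that the drones move apart.<br />

```python
import math

def avoidance_velocities(pos_a, vel_a, pos_b, vel_b, safe_dist=1.0, gain=1.0):
    """Return a pair of repel velocity commands when the two drones are
    within safe_dist of each other, or None when no action is needed."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    if math.hypot(dx, dy) >= safe_dist:
        return None                                # no imminent collision

    def repel(vel, away):
        # Perpendicular to this drone's velocity, pointing away from the other.
        perp = (-vel[1], vel[0])
        sign = 1.0 if perp[0] * away[0] + perp[1] * away[1] >= 0 else -1.0
        return (gain * sign * perp[0], gain * sign * perp[1])

    return repel(vel_a, (-dx, -dy)), repel(vel_b, (dx, dy))
```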
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
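The storage role described above can be sketched as follows (the real class is MATLAB code; this Python sketch only illustrates the pattern, and all names are assumptions): ball, drone and turtle are plain properties of the WorldModel, players are a class of their own, and writes go through explicit set-functions so that processes cannot overwrite World Model data by accident.<br />

```python
class Player:
    """Player position container; accessed through the WorldModel."""
    def __init__(self):
        self.position = None

class WorldModel:
    """Minimal sketch of the storage unit: writes only via set-functions."""
    def __init__(self, n_players_per_team):
        self._ball = None
        self._drone = None
        self._turtle = None
        # Two teams of n players each, as a separate class.
        self.players = [Player() for _ in range(2 * n_players_per_team)]

    def set_ball(self, position):
        self._ball = position

    def set_drone(self, position):
        self._drone = position

    def get_ball(self):
        return self._ball
```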
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. A particle filter, also known as Monte Carlo localization, is chosen. The main reason is that a particle filter can handle multiple-object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as explained below. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
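The outlier-reset logic described above (two consecutive measurements more than 0.5 m from the estimate re-initialise the strong filter) can be sketched as follows. This is an illustrative Python sketch, with a simple exponential-smoothing update standing in for the actual particle-filter update; the class and parameter names are assumptions:<br />

```python
import math

class DualHypothesisFilter:
    """Strong estimate plus raw-measurement 'weak' hypothesis: persistent
    outliers indicate a change of direction and reset the strong filter."""
    def __init__(self, x0, reset_dist=0.5, alpha=0.2):
        self.strong = x0
        self.reset_dist = reset_dist
        self.alpha = alpha          # stand-in for the particle-filter update
        self.outliers = 0

    def update(self, z):
        if math.dist(z, self.strong) > self.reset_dist:
            self.outliers += 1
            if self.outliers >= 2:  # change of direction, not a false positive
                self.strong = z
                self.outliers = 0
        else:
            self.outliers = 0
            # Strong filter: move only a little towards the measurement.
            self.strong = tuple(s + self.alpha * (m - s)
                                for s, m in zip(self.strong, z))
        return self.strong
```

A single far-off measurement is ignored as a potential false positive; a second one in the same vicinity re-initialises the strong estimate.<br />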
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
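The 'Match' step described above can be sketched as a greedy nearest-neighbour assignment (illustrative Python; the function name and list-based interface are assumptions). As noted in the text, this greedy fallback to the next nearest player is not optimal in general, but works well with a high update rate and few players:<br />

```python
import math

def match_measurements(measurements, last_positions):
    """Match each measured position to the closest not-yet-taken player;
    returns one player index per measurement."""
    assigned, matches = set(), []
    for z in measurements:
        # Player indices sorted by distance to this measurement.
        order = sorted(range(len(last_positions)),
                       key=lambda i: math.dist(z, last_positions[i]))
        idx = next(i for i in order if i not in assigned)
        assigned.add(idx)
        matches.append(idx)
    return matches
```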
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw rate and vertical speed. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, three LEDs on the drone can be detected by the camera above the field; from the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the top camera cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and reduce the measurement noise, making the closed-loop drone control more robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs; the corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the reconstructed drone position information is incomplete. Fig. 2 gives a visual impression of the original data measured by the top camera; the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation estimates reasonable guess for empty data points. <br />
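The gap-filling step can be sketched as follows (a small pure-Python stand-in for the MATLAB interpolation actually used; empty camera samples are represented as None, and holding leading/trailing gaps at the nearest value is an assumption):<br />

```python
def fill_gaps(samples):
    """Linearly interpolate empty (None) samples between the nearest known
    neighbours; leading/trailing gaps are held at the nearest known value."""
    known = [(i, v) for i, v in enumerate(samples) if v is not None]
    filled = list(samples)
    for i in range(len(samples)):
        if filled[i] is not None:
            continue
        prev = max(((j, v) for j, v in known if j < i), default=None,
                   key=lambda p: p[0])
        nxt = min(((j, v) for j, v in known if j > i), default=None,
                  key=lambda p: p[0])
        if prev is None:
            filled[i] = nxt[1]
        elif nxt is None:
            filled[i] = prev[1]
        else:                                # linear interpolation
            t = (i - prev[0]) / (nxt[0] - prev[0])
            filled[i] = prev[1] + t * (nxt[1] - prev[1])
    return filled
```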
====Coordinate systems====<br />
As the drone is a flying object with four controlled degrees of freedom in the field, two coordinate systems are used: one in the body frame, the other in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between body frame and global frame via the rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c, d) in the body frame, and the filtered data is transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, which avoids a parameter-varying Kalman filter. Figure 5 describes this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
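The frame change can be sketched as a planar rotation by the yaw angle ψ measured by the top camera (illustrative Python; the function name is an assumption). Applying the same rotation with -ψ maps global data back into the body frame, which is what keeps the Kalman filter itself frame-fixed:<br />

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame planar velocity into the global frame using the
    yaw angle psi (standard 2D rotation matrix)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)
```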
====Model identification from input to position====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response of input b is measured by the top camera. The preprocessed data is shown in following. And this processed data will be used in model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above figures display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result represents how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone is modeled with 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimation is nevertheless reasonable. <br><br><br />
<br />
The nonlinear behavior of the system may explain the remaining mismatch of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In this autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and can send high quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project it was decided to use the drone's own structure, control electronics and software for positioning the drone. Moreover, low-level control of a drone is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. The first idea was therefore to disassemble it and connect the camera to a swivel, tilting it down 90 degrees, which would require some structural changes. Since the whole implementation runs in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the drone's embedded camera to MATLAB is not straightforward; further effort showed that using this camera is either incompatible with MATLAB or causes a lot of delay. The idea of a swiveled drone camera was therefore abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time: the best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a horizontal FOV of nearly 70°, although the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
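The relation between the diagonal FOV, the aspect ratio and the distance per pixel can be sketched as follows (illustrative Python; it assumes a rectilinear lens and the camera pointing straight down, so the tangent of the half-FOV scales linearly with the sensor half-dimension):<br />

```python
import math

def fov_and_scale(diag_fov_deg, aspect_w, aspect_h, width_px, altitude):
    """Recover the horizontal FOV from a diagonal FOV and aspect ratio,
    and the ground distance covered per pixel at a given altitude."""
    diag = math.hypot(aspect_w, aspect_h)
    # tan of the half-FOV scales with the sensor half-dimension.
    tan_half_h = math.tan(math.radians(diag_fov_deg) / 2) * aspect_w / diag
    h_fov_deg = 2 * math.degrees(math.atan(tan_half_h))
    width_m = 2 * altitude * tan_half_h        # ground width of the image
    return h_fov_deg, width_m / width_px
```

For the drone camera (92° diagonal, 16:9, 640 px wide) this simple model predicts a horizontal FOV somewhat above the roughly 70° that was measured, which is consistent with the discrepancy noted above.<br />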
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated: to obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is described in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
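The initialization above can be sketched as follows (illustrative Python rather than the project's MATLAB code; the AT command syntax follows the AR.Drone SDK referenced above, sequence numbers must strictly increase, and the Navdata handshake from the figure is omitted here):<br />

```python
import socket

DRONE_IP = "192.168.1.1"
AT_PORT, NAVDATA_PORT = 5556, 5554

def at_command(name, seq, *args):
    """Format an AR.Drone AT command string, e.g. AT*FTRIM=1\r."""
    parts = ",".join([str(seq)] + [str(a) for a in args])
    return f"AT*{name}={parts}\r"

def init_drone():
    """Open the control (5556) and Navdata (5554) UDP sockets with the
    values listed above, then send FTRIM to set the horizontal reference."""
    at = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.settimeout(0.001)                      # 1 ms timeout, as configured
    at.sendto(at_command("FTRIM", 1).encode(), (DRONE_IP, AT_PORT))
    return at, nav
```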
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
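The input side of such a wrapper can be sketched as follows (illustrative Python; per the AR.Drone SDK, floats in AT commands are transmitted as the signed 32-bit integer with the same bit pattern, and the decoding of the 500-byte Navdata packet into the outputs listed above is omitted here):<br />

```python
import struct

def f2i(x):
    """Encode a float as the signed 32-bit integer with the same bits,
    as required by the AR.Drone AT command protocol."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def pcmd(seq, x_tilt, y_tilt, z_speed, yaw_speed):
    """Turn the four command doubles in [-1, 1] into the progressive
    command string sent over UDP."""
    vals = [f2i(v) for v in (x_tilt, y_tilt, z_speed, yaw_speed)]
    # flag = 1 enables progressive commands (tilt inputs are applied)
    return "AT*PCMD={},1,{},{},{},{}\r".format(seq, *vals)
```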
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a framerate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After surveying the options, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV). The definition of the FOV is shown in the figure below. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. This conversion is embedded in the Simulink code that transforms measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
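Given the diagonal FOV and resolution above, the ground area covered by the camera at a given altitude, and hence the real-world size of one pixel, can be computed as follows. This is a sketch assuming a simple pinhole model and a perfectly downward-facing camera (zero roll/pitch):<br />

```python
import math

def ground_footprint(altitude_m, diag_fov_deg=60.0, aspect=(4, 3), res=(640, 480)):
    """Approximate ground coverage of a downward-facing camera.
    Returns (width [m], height [m], metres-per-pixel)."""
    ax, ay = aspect
    diag = math.hypot(ax, ay)                      # 5 for a 4:3 sensor
    half_diag = altitude_m * math.tan(math.radians(diag_fov_deg / 2.0))
    width = 2.0 * half_diag * ax / diag            # footprint width  [m]
    height = 2.0 * half_diag * ay / diag           # footprint height [m]
    m_per_px = width / res[0]                      # metres represented by one pixel
    return width, height, m_per_px
```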
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any expansion: part of the extensive existing code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of an appropriate tracking control algorithm is a crucial element of the drone referee project. The agents in this project have two main capabilities: they can move and they can take images. Given the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as the reference for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and are used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image-processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented on the drone. Using identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Motion Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative using PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands to the drone in the oscillation region.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
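The dead-zone PD law described above can be sketched per axis as follows (Python for illustration; the gains and dead-zone width in the example call are placeholders, not the tuned project values):<br />

```python
def deadzone_pd(error, d_error, kp, kd, deadzone):
    """PD controller with a dead zone: inside the zone the command is zero,
    outside the zone the raw (non-offset) PD command is sent, which avoids
    issuing the small commands that excite the drone's oscillatory region."""
    if abs(error) < deadzone:
        return 0.0
    return kp * error + kd * d_error
```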
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation about the specific axes matters, as does the sequence of rotations. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
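Under this small roll/pitch assumption, the transformation of a planar command from the global frame into the drone frame reduces to a rotation about z by the yaw angle, as in this sketch:<br />

```python
import math

def global_to_drone(vx_g, vy_g, yaw):
    """Rotate a planar command from the global frame into the drone frame.
    With roll and pitch assumed negligible, the full RPY rotation reduces
    to a planar rotation by the yaw angle."""
    c, s = math.cos(yaw), math.sin(yaw)
    vx_d = c * vx_g + s * vy_g
    vy_d = -s * vx_g + c * vy_g
    return vx_d, vy_d
```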
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy fitted with a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
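A minimal sketch of sending such a command from the host side is given below in Python. The "vx,vy,omega" string format is a hypothetical placeholder; the actual message format is defined by the Python script in the GitHub repository:<br />

```python
import socket

def format_robot_command(vx, vy, omega):
    """Encode a (hypothetical) velocity command as a comma-separated string
    for the Raspberry Pi to parse; the real protocol lives in the repo."""
    return "%.3f,%.3f,%.3f" % (vx, vy, omega)

def send_robot_command(addr, vx, vy, omega):
    """Fire the command string at the robot over UDP (Wi-Fi)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(format_robot_command(vx, vy, omega).encode("ascii"), addr)
    sock.close()
```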
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are given with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location information of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45692Implementation MSD162017-10-22T21:37:24Z<p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably, we would use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transform the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
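The RGB-to-YCbCr step can be sketched per pixel as follows, using the ITU-R BT.601 coefficients that MATLAB's rgb2ycbcr also uses (a real implementation would of course operate on whole image arrays at once):<br />

```python
def rgb_to_ycbcr(r, g, b):
    """Convert an 8-bit RGB pixel to YCbCr (ITU-R BT.601, studio range):
    Y carries brightness, Cb/Cr carry chroma, so color thresholds become
    largely illumination-independent."""
    y = 16 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0
    cb = 128 + (-37.797 * r - 74.203 * g + 112.0 * b) / 255.0
    cr = 128 + (112.0 * r - 93.786 * g - 18.214 * b) / 255.0
    return y, cb, cr
```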
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line-detection algorithm is updated and reused in this project. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line-detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball-detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow: colors in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter out some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the resulting image, a blob-recognition algorithm returns blobs with their properties, such as the blob center and major- and minor-axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minorAxis / majorAxis) * (min(R_blob, R_ball) / max(R_blob, R_ball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
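The confidence formula above translates directly into code, for example:<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence for a candidate ball blob: roundness (minor/major axis
    ratio) times size agreement (ratio of blob radius to expected ball
    radius). A perfectly round blob of exactly the ball's size scores 1."""
    roundness = minor_axis / major_axis
    size_agreement = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_agreement
```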
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of accepted blob sizes that could be players is larger for object detection than for ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different than one seen from an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball-detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line-detection case, the algorithm developed by the previous generation is essentially reused. A detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted by the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well; further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection uses the list of blobs generated by the object-detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
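This condition can be wrapped in a small predicate, for example:<br />

```python
def possible_collision(major_axis, minor_axis, min_object_radius):
    """A merged blob that is clearly elongated (axis ratio > 1.5) and at
    least two (four) player radii along its minor (major) axis suggests
    two players standing against each other."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * min_object_radius
            and major_axis >= 4 * min_object_radius)
```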
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in diverse ways. In this project, the planar position (x, y, ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch, yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image-processing algorithms are developed under one essential assumption: the drone's angular position is well stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account further.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, and the output data of the altimeter is accessible. The obtained altitude data is fused with the planar position data, and the following position vector is obtained for the drone.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball- and object-detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates must be transformed into the reference frame. The coordinate system of the image is given below. Note that, here, the data is in pixels. <br />
[[File:FrameRef_seperate.png|thumb|centre|500px|Fig.: Coordinate axis of the image frame]]<br />
<br />
Next, the pixel coordinates of the center of each detected object or ball are calculated with respect to the center of the image. This data is processed under the following assumptions:<br />
* The center of the image is assumed to be the focal center of the camera, neglecting any tilting of the camera <br />
* The camera is assumed to be aligned parallel to the field plane when the picture is obtained<br />
[[File:RelativeCoordinates.png|thumb|centre|800px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
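Combining the FOV-based pixel scale with the drone pose from the top camera, the pixel-to-world mapping can be sketched as below. This assumes a pinhole camera pointing straight down (zero roll/pitch) with the image center on the optical axis; the resolution and diagonal FOV defaults are those of the Ai-Ball:<br />

```python
import math

def pixel_to_world(px, py, drone_x, drone_y, drone_yaw, altitude,
                   res=(640, 480), diag_fov_deg=60.0):
    """Map a detection at pixel (px, py) to field coordinates."""
    w, h = res
    half_diag = altitude * math.tan(math.radians(diag_fov_deg / 2.0))
    m_per_px = 2.0 * half_diag / math.hypot(w, h)  # metres per ground pixel
    # Offset from the image centre; image y grows downward.
    dx_cam = (px - w / 2.0) * m_per_px
    dy_cam = -(py - h / 2.0) * m_per_px
    # Rotate the camera-frame offset into the field frame by the drone yaw.
    c, s = math.cos(drone_yaw), math.sin(drone_yaw)
    return (drone_x + c * dx_cam - s * dy_cam,
            drone_y + s * dx_cam + c * dy_cam)
```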
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents, which is sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' as a task to agent A (the drone) and 'locate player' to agent B. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig. 1, it is assumed that the World Model is able to provide the position and velocity of objects such as the ball, whether or not it has been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about any object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two aspects are addressed in the path-planning block. The first relates to the case of multiple drones, in order to avoid collisions between them. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is more efficient to also take the velocity vector of the object into account. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between drone and ball is large, the drone should track a position ahead of the object, in order to meet it at the intersection of the velocity vectors. Using the current ball position as reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is the optimal time ahead t0 that should be used for the desired reference. To solve it, we require a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the drone's initial conditions. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig. 3); the target position is simply extrapolated from the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the Turtle. Hence, only the x-component (Turtle moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
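The search for the look-ahead time t0 can be sketched as follows. The drone model here is deliberately crude (constant speed) to keep the example self-contained; the project would instead use the identified first-order closed-loop model to estimate the time to target:<br />

```python
import math

def reference_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    dt=0.05, t_max=5.0):
    """Search the look-ahead time t0 at which the drone's time-to-target
    equals t0, and return the corresponding reference position
    [x(t+t0), y(t+t0)]. Time-to-target is crudely modelled as
    distance / constant drone speed."""
    for k in range(int(t_max / dt) + 1):
        t0 = k * dt
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0], target[1] - drone_pos[1])
        if dist / drone_speed <= t0:   # drone can reach this point in time
            return target
    # No intercept within the horizon: fall back to the current ball position.
    return ball_pos
```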
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning calculated from the drones' objectives (see Fig. 4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision-avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone; it is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are at safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion where applicable, stores the filtered information, and monitors itself to produce outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and their position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act as a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players) to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown in Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
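The storage role with guarded 'set' functions can be sketched as below (Python for illustration; the actual class is in MATLAB). The method names here are illustrative, not necessarily those of the real, non-integrated World Model class:<br />

```python
class Player:
    """Holds the last known planar position of one player."""
    def __init__(self):
        self.pos = None  # (x, y), unknown until first set

class WorldModel:
    """Minimal sketch of the storage role: state is written only through
    'set' methods, so no process overwrites the model by accident."""
    def __init__(self, n_players):
        self._ball = None
        self._drone = None
        # n_players per team, two teams; the number of balls is fixed at 1.
        self._players = [Player() for _ in range(2 * n_players)]

    def set_ball(self, x, y):
        self._ball = (x, y)

    def set_drone(self, x, y, yaw):
        self._drone = (x, y, yaw)

    def set_player(self, i, x, y):
        self._players[i].pos = (x, y)

    def get_ball(self):
        return self._ball
```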
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
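The re-initialization rule described above can be sketched as follows. This is an illustrative Python sketch (the project itself runs in MATLAB/Simulink): the 0.5 m threshold and the two-consecutive-measurement count come from the text, while the class name and the smoothing constant `alpha` are assumptions made for illustration.

```python
import math

OUTLIER_DIST = 0.5   # metres; threshold used in the current implementation
OUTLIER_COUNT = 2    # consecutive outliers before re-initializing

class StrongFilter:
    """Sketch of the 'strong' ball-position filter with outlier-based restart."""

    def __init__(self, x, y, alpha=0.9):
        self.pos = (x, y)
        self.alpha = alpha   # how strongly the old state is trusted (assumed value)
        self.outliers = 0

    def update(self, zx, zy):
        ex, ey = zx - self.pos[0], zy - self.pos[1]
        if math.hypot(ex, ey) > OUTLIER_DIST:
            self.outliers += 1
            if self.outliers >= OUTLIER_COUNT:
                # change of direction detected: restart from the last measurement
                self.pos = (zx, zy)
                self.outliers = 0
            return self.pos
        self.outliers = 0
        # normal case: heavily filtered update toward the measurement
        a = self.alpha
        self.pos = (a * self.pos[0] + (1 - a) * zx,
                    a * self.pos[1] + (1 - a) * zy)
        return self.pos
```

A single outlier leaves the state untouched; only the second consecutive one triggers the restart, matching the false-positive reasoning above.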
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
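The exact update law is given in the equation image above; as a hedged reading of it, the description of α_v, α_x and α_z is consistent with a normalized blend of three terms, sketched below in Python for illustration. The specific blend and the default weights are assumptions, not the project's tuned values.

```python
def velocity_update(v_old, z_new, z_old, x_old, dt,
                    alpha_v=1.0, alpha_x=0.2, alpha_z=0.5):
    """Assumed blend: old velocity (alpha_v), measurement-to-measurement
    velocity (alpha_z) and a position-correction term (alpha_x)."""
    v_meas = tuple((n - o) / dt for n, o in zip(z_new, z_old))   # direction from measurements
    v_corr = tuple((n - o) / dt for n, o in zip(z_new, x_old))   # pull toward measurements
    s = alpha_v + alpha_x + alpha_z
    return tuple((alpha_v * vo + alpha_z * vm + alpha_x * vc) / s
                 for vo, vm, vc in zip(v_old, v_meas, v_corr))
```

With consistent measurements (constant velocity) all three terms agree and the velocity is unchanged, matching the intuition that the weights only matter when the measurements disagree with the prediction.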
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter which source a measurement comes from. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
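The matching step described above can be sketched as a greedy nearest-neighbour assignment. This Python sketch is illustrative only (the actual 'Match' function is MATLAB code inside the particle filter); it shows the fallback to the next-nearest free player when two measurements pick the same one, which is exactly the non-optimal case discussed.

```python
import math

def match(measurements, players):
    """Greedily assign each measured position to the nearest not-yet-taken
    player; returns the matched player index per measurement."""
    assigned = []
    taken = set()
    for m in measurements:
        # player indices sorted by distance to this measurement
        order = sorted(range(len(players)),
                       key=lambda i: math.dist(m, players[i]))
        # first nearest neighbour that is still free
        pick = next(i for i in order if i not in taken)
        taken.add(pick)
        assigned.append(pick)
    return assigned
```

Because the assignment is greedy (per measurement, in order), it can be suboptimal when many players cluster together, which is the performance concern raised above for larger teams.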
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and the vertical direction. The corresponding forward and side velocities in the body frame are measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. From the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and to suppress the measurement noise, so that the subsequent closed-loop control of the drone can be made robust. Since the flying height of the drone is not critical for the system, the height is not included in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
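A generic predict/update cycle for the per-axis filter described here can be sketched as follows. This Python/NumPy sketch is illustrative: the matrices A, B, C come from the identified state-space models of the following sections (the values in the test are placeholders), and passing no measurement covers the frames in which the top camera misses the LEDs.

```python
import numpy as np

def kf_step(x, P, u, z, A, B, C, Q, R):
    """One Kalman predict/update cycle; pass z=None when the camera
    missed the drone, so the filter keeps predicting."""
    # predict with the identified model
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    if z is not None:
        # update with the top-camera measurement
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (z - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
    return x, P
```

With the state X = [ẋ, x] used below, C selects the position (the quantity the camera measures) while the velocity is corrected indirectly through the Kalman gain.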
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are applied as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. Figure 2 gives a visual impression of the original data measured by the top camera; the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation provides a reasonable estimate for the empty data points. <br />
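The gap-filling step can be sketched as a simple linear interpolation over the missing samples. This is an illustrative Python sketch (the project uses MATLAB); missing camera samples are represented as `None`, and it is assumed that the first and last samples of the trace are valid.

```python
def interpolate_gaps(samples):
    """Linearly interpolate None entries between valid neighbours."""
    out = list(samples)
    n = len(out)
    for i, v in enumerate(out):
        if v is None:
            # find the valid samples bracketing this gap
            lo = next(j for j in range(i - 1, -1, -1) if out[j] is not None)
            hi = next(j for j in range(i + 1, n) if out[j] is not None)
            t = (i - lo) / (hi - lo)
            out[i] = out[lo] + t * (out[hi] - out[lo])
    return out
```

With roughly a quarter of the camera samples empty, this reconstructs a continuous position trace like the one in the processed-data figure.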
====Coordinate systems====
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame via the control signals (a, b, c, d), and the measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global frame. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid a parameter-varying Kalman filter. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
====Model identification from input to position ====
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, a dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model in state-space form is: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR.Drone was measured and modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may cause part of the mismatch of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification is measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with the state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high-quality HD video streams to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller and driver electronics. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, designing a low-level drone controller is complicated and also out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. Therefore the first idea was to disassemble it and mount it on a swivel so that it could be tilted down by 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring the images of the embedded drone camera is not straightforward from MATLAB: using this camera is either incompatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed an effective horizontal view near 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
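The geometry behind these measurements can be sketched as follows: for a rectangular sensor, the horizontal and vertical FOV follow from the diagonal FOV and the aspect ratio. This is an illustrative Python sketch of that relation (not project code); note that the purely geometric horizontal value for a 92° diagonal exceeds the ~70° measured above, consistent with the observation that the effective FOV is smaller than the specification.

```python
import math

def fov_from_diagonal(diag_fov_deg, aspect_w, aspect_h):
    """Split a diagonal FOV into horizontal and vertical FOV (degrees)."""
    d = math.hypot(aspect_w, aspect_h)              # diagonal of the aspect ratio
    half = math.tan(math.radians(diag_fov_deg) / 2)  # half-diagonal tangent
    hfov = 2 * math.degrees(math.atan(half * aspect_w / d))
    vfov = 2 * math.degrees(math.atan(half * aspect_h / d))
    return hfov, vfov
```

For the 92° diagonal and 16:9 ratio this gives a horizontal FOV in the mid-80s of degrees, so the measured ~70° suggests the usable image covers less than the full optical field.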
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
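The initiation sequence can be sketched as below. This is a hedged Python sketch based on the AR.Drone SDK conventions (the project implements this in MATLAB): a wake-up packet to the Navdata port, an AT*CONFIG command to request the reduced navdata set, and an AT*FTRIM command for the horizontal reference; sequence numbers and the acknowledgement handshake are simplified here.

```python
import socket

def at_config(seq, key, value):
    # AT*CONFIG sets a configuration key on the drone
    return f'AT*CONFIG={seq},"{key}","{value}"\r'.encode()

def at_ftrim(seq):
    # AT*FTRIM takes the current attitude as the horizontal-plane reference
    return f'AT*FTRIM={seq}\r'.encode()

def init_drone(ip="192.168.1.1"):
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cmd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.sendto(b"\x01\x00\x00\x00", (ip, 5554))               # wake up the Navdata stream
    cmd.sendto(at_config(1, "general:navdata_demo", "TRUE"), (ip, 5556))
    cmd.sendto(at_ftrim(2), (ip, 5556))
    return nav, cmd
```

The AT commands are plain strings terminated by a carriage return; the sequence number must increase with every command sent.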
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
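The input side of the wrapper can be sketched as follows. This is an illustrative Python sketch (the project's wrapper is a MATLAB function); per the AR.Drone SDK convention, each floating-point argument of the AT*PCMD movement command is transmitted as the signed 32-bit integer that shares its IEEE-754 bit pattern.

```python
import struct

def float_to_int32(f):
    """Reinterpret a 32-bit float's bit pattern as a signed 32-bit int."""
    return struct.unpack("<i", struct.pack("<f", f))[0]

def pcmd(seq, tilt_x, tilt_y, vz, yaw_rate, flag=1):
    """Build the AT*PCMD string for the four [-1, 1] wrapper inputs."""
    vals = ",".join(str(float_to_int32(v))
                    for v in (tilt_x, tilt_y, vz, yaw_rate))
    return f"AT*PCMD={seq},{flag},{vals}\r"
```

For example, a tilt of -0.8 is encoded as -1085485875, the SDK's well-known example value; zeros encode as 0.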
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, a Wi-Fi webcam was selected, whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the battery of the camera is removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV), whose definition is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to determine the real-world size of the image frame and the corresponding real-world dimension per pixel, and it is embedded in the Simulink code when converting measured positions to world coordinates.<br />
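The pixel-to-world conversion embedded in the Simulink code can be sketched as follows. This is an illustrative Python sketch under a simplifying assumption (camera looking straight down, a single effective horizontal FOV): at flying height h, the camera covers a ground strip of width 2·h·tan(FOV/2), so each pixel maps to that width divided by the pixel count.

```python
import math

def metres_per_pixel(height_m, hfov_deg, pixels):
    """Ground distance covered by one pixel for a downward-looking camera."""
    width = 2 * height_m * math.tan(math.radians(hfov_deg) / 2)  # ground footprint
    return width / pixels
```

For instance, at 1 m height with a 60° horizontal FOV and 640 pixels, each pixel covers roughly 1.8 mm on the ground; the real code must of course use the horizontal FOV derived from the 60° diagonal specification.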
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, the planar motion of the drone in (x,y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are straight-line-like; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x,y,θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. With identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig.3). The input-output diagram of the controller for each state consists of three regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero. This results in a comfort zone, corresponding to the dead zone of the controller, in which the drone stays without any motion. If the error is larger than that value, the output is determined by the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the equations of motion of the drone, an I-action is not necessary for the controller. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
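The dead-zone PD law described here can be sketched per state as follows. This is an illustrative Python sketch: the dead-zone width and the PD gains are placeholders, not the tuned project values, and a per-axis scalar law is assumed.

```python
def deadzone_pd(error, d_error, dead_zone=0.1, kp=1.0, kd=0.5):
    """Dead-zone PD controller for one drone state (x, y or theta)."""
    if abs(error) < dead_zone:
        return 0.0   # inside the comfort zone: send no command
    # note: the error is NOT offset by the dead-zone width, to avoid
    # sending small commands in the LLC's oscillation region
    return kp * error + kd * d_error
```

Inside the comfort zone the drone receives no command at all; just outside it the command jumps directly to the full PD value, which is exactly the no-offset behavior motivated above.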
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Ф,φ,θ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order in which the rotations are applied about the specific axes matters. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
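With roll and pitch neglected, the transformation becomes a planar rotation by the yaw angle ψ. The following Python sketch is illustrative only; the sign convention depends on how ψ is measured, and here a rotation of the global-frame components into the body frame is assumed.

```python
import math

def global_to_body(vx_g, vy_g, psi):
    """Rotate a planar global-frame vector into the body frame (yaw-only)."""
    c, s = math.cos(psi), math.sin(psi)
    # planar rotation by -psi maps global components to body components
    return (c * vx_g + s * vy_g, -s * vx_g + c * vy_g)
```

For a drone yawed 90° to the left, a global command along +x becomes a body-frame command along -y, which is why the commands must be rotated before being sent as fly commands.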
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that suited the needs of the project best was handpicked. This data, as stated earlier, is the information on the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>

Implementation MSD16 (2017-10-22T21:36:50Z) <p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we would use this software to also process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transform the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since this project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and reused. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
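The project uses MATLAB's built-in Hough transform; the voting principle behind it can be sketched in a few lines of Python/NumPy. This is only an illustration of the technique, not the project's code, and the function name and interface are our own.<br />

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180):
    """Vote each edge pixel (y, x) into a (rho, theta) accumulator."""
    h, w = img_shape
    rho_max = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # rho ranges over [-rho_max, rho_max]; offset indices so they are non-negative
    acc = np.zeros((2 * rho_max + 1, n_theta), dtype=int)
    for y, x in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + rho_max, np.arange(n_theta)] += 1
    return acc, thetas, rho_max

# A horizontal line y = 5 yields a strong accumulator peak at theta = 90 degrees, rho = 5:
pts = [(5, x) for x in range(50)]
acc, thetas, rho_max = hough_lines(pts, (60, 60))
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(peak[0] - rho_max, round(np.degrees(thetas[peak[1]])))  # → 5 90
```

Collinear edge pixels all vote for the same (rho, theta) cell, so field lines show up as peaks in the accumulator.<br />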
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls, a confidence is calculated, based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
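The confidence formula above can be written out directly. In this sketch (Python rather than the project's MATLAB) we assume Rblob is the blob's measured radius and Rball the expected ball radius in pixels:<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence from blob roundness and size, per the formula above."""
    roundness = minor_axis / major_axis                      # 1.0 for a perfect circle
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)   # 1.0 when radii agree
    return roundness * size_match

# A round blob with the expected radius gets full confidence:
print(ball_confidence(minor_axis=20, major_axis=20, r_blob=10, r_ball=10))  # → 1.0
# An elongated, undersized blob is penalized on both factors:
print(ball_confidence(minor_axis=10, major_axis=20, r_blob=5, r_ball=10))   # → 0.25
```

Both factors lie in (0, 1], so the product stays in (0, 1] and only blobs that are both round and correctly sized score high.<br />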
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the size range of accepted blobs is larger for the object detection than it was for the ball detection. This is because the players are not perfectly round like the ball: if a player is seen from the top, they will appear different than when they are seen from an angle. A bigger acceptance range for blobs lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-the-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take the images of the playing field and see in the image that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
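This check can be sketched as a small predicate (illustrative Python; the project implements it in MATLAB, and the argument names here are ours):<br />

```python
def possible_collision(major_axis, minor_axis, min_radius):
    """Flag a blob as a possible collision of two players (condition above):
    the blob must be clearly elongated and large in both dimensions."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)

# Two touching players form one elongated blob roughly twice as long as wide:
print(possible_collision(major_axis=90, minor_axis=45, min_radius=10))  # → True
# A single, roughly round player does not trigger the check:
print(possible_collision(major_axis=45, minor_axis=40, min_radius=10))  # → False
```
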
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x, y, ψ) of the refereeing agent (currently only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image-processing algorithms are developed under one essential assumption: the drone attitude is stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore, these two angles have not been taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and image processing, the drone altitude must also be known. The drone has its own altimeter, whose output data is accessible. The obtained altitude data is fused with the planar position data, and the agent position vector is obtained as: <br />
<br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
As a result of the ball and object detection skills, the detected object coordinates are obtained in pixels. To define the location of the detected objects in the image, the obtained pixel coordinates must be transformed into the reference frame. The coordinate system of the image is given below. Note that, here, the obtained data is in pixels. <br />
[[File:RelativeCoordinates.png|thumb|centre|800px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
Next, the pixel coordinates of the center of each detected object or ball are calculated relative to the center of the image. This data is processed according to the following principles:<br />
* The center of the image is assumed to coincide with the focal center of the camera, neglecting any tilting of the camera <br />
* The camera is assumed to be aligned parallel to the field plane <br />
[[File:FrameRef_seperate.png|thumb|centre|500px|Fig.: Coordinate axis of the image frame]]<br />
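Under these principles, a pixel can be mapped to field coordinates from the drone pose and altitude. The sketch below (Python; the project works in MATLAB) assumes a downward-looking camera, the measured ~70° horizontal FOV and 640x360 resolution from the hardware section, and particular axis conventions that may differ from the actual implementation:<br />

```python
import math

def pixel_to_field(px, py, drone_x, drone_y, psi, z,
                   img_w=640, img_h=360, fov_h=math.radians(70)):
    """Map a pixel (px, py) to field coordinates, assuming the camera looks
    straight down (zero roll/pitch) with its focal center at the image center."""
    # metres per pixel at altitude z (the horizontal FOV spans the image width)
    m_per_px = 2 * z * math.tan(fov_h / 2) / img_w
    # offset from the image center, in metres, in the camera (body) frame
    u = (px - img_w / 2) * m_per_px
    v = (py - img_h / 2) * m_per_px
    # rotate by the drone yaw into the field frame and translate by drone position
    fx = drone_x + u * math.cos(psi) - v * math.sin(psi)
    fy = drone_y + u * math.sin(psi) + v * math.cos(psi)
    return fx, fy

# A pixel at the image center maps to the point directly under the drone:
print(pixel_to_field(320, 180, drone_x=1.0, drone_y=2.0, psi=0.0, z=1.5))  # → (1.0, 2.0)
```
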
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects like the ball, whether or not it has been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about any object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path-planning block. The first is related to the case of multiple drones, in order to avoid collisions between them. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object in order to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve it, we require a model of the drone motion with its controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply the predicted ball position at that time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the TURTLE. Hence, only the x-component (the TURTLE's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
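The search for the time ahead t0 with t0 = TT can be sketched as follows. This is an illustrative Python version, not the project's MATLAB code; in particular, the constant-speed drone model and the numeric parameters are stand-in assumptions (the real system would use the identified drone model with its controller):<br />

```python
import math

def time_to_target(drone_pos, target, v_max=2.0):
    """Stand-in drone model: time to reach `target` at constant speed v_max."""
    return math.hypot(target[0] - drone_pos[0], target[1] - drone_pos[1]) / v_max

def reference_point(drone_pos, ball_pos, ball_vel, dt=0.05, t_max=5.0):
    """Step the time ahead t0 until the drone's time-to-target TT(t0) <= t0,
    and return the predicted ball position at that time as the reference."""
    t0 = 0.0
    while t0 < t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(drone_pos, target) <= t0:
            return target
        t0 += dt
    # no intercept found within t_max: fall back to the current ball position
    return ball_pos

# Drone at the origin, ball 4 m away moving away at 1 m/s, drone twice as fast:
# the intercept happens 4 s ahead, 8 m from the origin.
print(reference_point((0.0, 0.0), (4.0, 0.0), (1.0, 0.0)))  # → approximately (8.0, 0.0)
```
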
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planning should create paths for the agents in a way that avoids collisions between them. This is done in the collision-avoidance block, which has higher priority than the optimal path planning calculated from the objectives of the drones (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to the collision-avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone; it is sent to the LLC as a velocity command in the direction that results in collision avoidance, and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
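As a minimal illustration of this storage pattern (the actual class is written in MATLAB and not integrated; all names below are ours and only mirror the idea of guarded 'set' functions):<br />

```python
class Player:
    def __init__(self):
        self.pos = None  # last known (x, y), None until first measurement

class WorldModel:
    """Storage sketch: state changes only go through explicit 'set' methods,
    preventing processes from accidentally overwriting World Model data."""
    def __init__(self, n):
        self._ball = None
        self._drone = None
        self._turtle = None
        self._players = [Player() for _ in range(2 * n)]  # n players per team

    # 'set' functions are the only way to update the stored state
    def set_ball(self, pos):      self._ball = pos
    def set_drone(self, pos):     self._drone = pos
    def set_turtle(self, pos):    self._turtle = pos
    def set_player(self, i, pos): self._players[i].pos = pos

    # read access is unrestricted
    def ball(self):      return self._ball
    def player(self, i): return self._players[i].pos

W = WorldModel(2)       # two players per team
W.set_ball((1.0, 0.5))
print(W.ball())         # → (1.0, 0.5)
```
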
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
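The reset rule described here can be sketched as follows. This is a minimal Python illustration with names of our own; the real MATLAB filter also blends measurements into the estimate through the particle weights, which is omitted here:<br />

```python
import math

class BallTracker:
    """Sketch of the reset rule: two consecutive measurements more than
    `threshold` metres from the strong estimate re-initialize the filter."""
    def __init__(self, init_pos, threshold=0.5):
        self.estimate = init_pos
        self.threshold = threshold
        self.outliers = 0

    def update(self, z):
        dist = math.hypot(z[0] - self.estimate[0], z[1] - self.estimate[1])
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:   # change of direction, not a false positive
                self.estimate = z    # the outlier becomes the new initial value
                self.outliers = 0
        else:
            self.outliers = 0
            # a full filter would blend z into the estimate here; kept simple
            self.estimate = z
        return self.estimate

t = BallTracker((0.0, 0.0))
t.update((2.0, 0.0))    # first outlier: treated as a possible false positive
print(t.estimate)       # → (0.0, 0.0)
t.update((2.1, 0.0))    # second consecutive outlier: filter re-initialized
print(t.estimate)       # → (2.1, 0.0)
```
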
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the TURTLE. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in the case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
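The 'Match' step can be sketched as a greedy nearest-neighbour assignment (illustrative Python; the project's MATLAB implementation may differ in details):<br />

```python
import math

def match(measurements, known_positions):
    """Greedy nearest-neighbour matching of measurements to known players.
    A player already claimed falls through to the next-nearest candidate,
    mirroring the 'second nearest neighbour' behaviour described above."""
    assigned = {}
    taken = set()
    for m_idx, m in enumerate(measurements):
        order = sorted(range(len(known_positions)),
                       key=lambda p: math.hypot(m[0] - known_positions[p][0],
                                                m[1] - known_positions[p][1]))
        for p_idx in order:
            if p_idx not in taken:
                assigned[m_idx] = p_idx
                taken.add(p_idx)
                break
    return assigned

players = [(0.0, 0.0), (3.0, 0.0)]
# both measurements are nearest to player 0; the second falls back to player 1
print(match([(0.1, 0.0), (0.2, 0.0)], players))  # → {0: 0, 1: 1}
```

As the text notes, this greedy assignment is not globally optimal: with many players entering and leaving the field of view, an optimal assignment (e.g. minimizing the total distance) could behave differently.<br />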
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw rate and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the subsequent closed-loop control of the drone can be robust. As the flying height of the drone is not critical for the system, the height of the drone is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command a is the front-back tilt, a floating-point value in the range [-1, 1]. Command b is the left-right tilt, a floating-point value in the range [-1, 1]. d is the drone angular speed, in the range [-1, 1]. The forward and side velocities are displayed in the body frame (orange). The position (x, y, ψ) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) gives a visual impression of the original data measured by the top camera. Based on fig.2, the data clearly indicates what the drone motion looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation provides a reasonable guess for the empty data points. <br />
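The gap filling described above can be sketched with NumPy (assuming the missed camera frames are marked as NaN; the project does this in MATLAB, and the example track below is made up for illustration):<br />

```python
import numpy as np

# Example: a position track where ~25% of the camera frames were dropped
t = np.arange(10.0)
x = np.sin(0.3 * t)
x[[2, 5, 6]] = np.nan          # frames where the top camera missed the LEDs

valid = ~np.isnan(x)
# linear interpolation over the gaps, using only the valid samples as support
x_filled = np.interp(t, t[valid], x[valid])
```

The valid samples are reproduced exactly, and each gap is bridged by a straight line between its neighbours, which is a reasonable guess for short dropouts of a smoothly moving drone.<br />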
==== Coordinate systems ====<br />
As the drone is a flying object with four controlled degrees of freedom above the field, two coordinate systems exist: one is the body frame, the other is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
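The planar rotation between body and global frame is the standard yaw rotation matrix; a small Python/NumPy sketch (the project does this in MATLAB/Simulink):<br />

```python
import numpy as np

def body_to_global(v_body, psi):
    """Rotate a body-frame planar vector into the global frame via the yaw psi."""
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi),  np.cos(psi)]])
    return R @ v_body

def global_to_body(v_global, psi):
    """Inverse mapping: the rotation matrix is orthogonal, so its transpose inverts it."""
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi),  np.cos(psi)]])
    return R.T @ v_global

# A pure forward (body-x) velocity with a 90 degree yaw points along the global y-axis:
v = body_to_global(np.array([1.0, 0.0]), np.pi / 2)
print(np.round(v, 6))  # → [0. 1.]
```

Because the matrix depends on the measured yaw, keeping it outside the filter keeps the Kalman model itself time-invariant, as described above.<br />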
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below; this processed data is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figure above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions need to be made to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result represents how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR.Drone, there are 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the part of the response that the identified model does not match.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with the state vector [ẏ, y], i.e. velocity and position.<br><br><br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for refereeing. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and can send high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, it was decided to use the drone's own structure, control electronics and software for the positioning of the drone. Apart from that, controlling a drone is complicated and also out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing it should look downwards. Therefore, the first idea was to disassemble it and mount the camera on a swivel to tilt it down 90 degrees, which would require some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after some effort and trial and error, it was observed that capturing and transferring the images of the drone's embedded camera is not easy or straightforward in MATLAB. Further effort showed that the use of this drone camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore, an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used for processing to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field-of-view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, the measurements showed a horizontal FOV of close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were obtained with the drone camera, that camera is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system is investigated. To obtain simple communication and satisfactory image quality, a Wi-Fi camera with a TCP/IP communication interface is selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP-object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
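The initialization steps above can be sketched in code. The sketch below is illustrative Python rather than the project's MATLAB implementation; the AT-command format and the navdata wake-up bytes follow the AR.Drone SDK documentation, but the exact configuration string is an assumption.<br />

```python
import socket

DRONE_IP = "192.168.1.1"   # remote host from the list above
AT_PORT = 5556             # control port
NAVDATA_PORT = 5554        # navdata port

def at_command(cmd, seq, args=""):
    # AT commands are ASCII strings of the form "AT*<CMD>=<seq><args>\r",
    # where <seq> must increase with every command sent.
    return "AT*{}={}{}\r".format(cmd, seq, args).encode("ascii")

def make_sockets():
    at_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav_sock.settimeout(0.001)  # 1 ms timeout, as initialized above
    return at_sock, nav_sock

# Typical start-up sequence (not executed here, requires a connected drone):
#   nav_sock.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAVDATA_PORT))  # wake navdata
#   at_sock.sendto(at_command("CONFIG", 1, ',"general:navdata_demo","TRUE"'),
#                  (DRONE_IP, AT_PORT))
#   at_sock.sendto(at_command("FTRIM", 2), (DRONE_IP, AT_PORT))     # horizontal reference
```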
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
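A minimal sketch of the command side of this wrapper is shown below, in Python for illustration (the project itself uses MATLAB). The encoding of float arguments as 32-bit integers follows the AR.Drone SDK; the mapping of the wrapper's (x, y, z, psi) inputs onto the PCMD argument order and their signs is an assumption to be checked against the SDK.<br />

```python
import struct

def float_arg(value):
    # The AR.Drone SDK encodes float arguments as the signed 32-bit integer
    # that shares the float's bit pattern (e.g. -0.8 -> -1085485875).
    return struct.unpack("<i", struct.pack("<f", float(value)))[0]

def fly_command(seq, x, y, z, psi):
    # Wrapper input: four doubles in [-1, 1] -> tilt in x (front), tilt in
    # y (left), vertical speed and yaw rate, as described above. The order
    # (roll, pitch, gaz, yaw) and the sign conventions are assumptions.
    clip = lambda v: max(-1.0, min(1.0, float(v)))
    roll, pitch, gaz, yaw = clip(y), clip(x), clip(z), clip(psi)
    args = ",".join(str(float_arg(v)) for v in (roll, pitch, gaz, yaw))
    return "AT*PCMD={},1,{}\r".format(seq, args).encode("ascii")
```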
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is mounted facing down at the front of the drone. To reduce the weight of the added system, the camera's batteries are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to determine the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
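Given the diagonal FOV and the drone altitude, the per-pixel ground distance can be derived as sketched below. This is an illustrative Python version of the conversion embedded in the Simulink code; it assumes a level, downward-facing camera and a yaw-compensated image.<br />

```python
import math

def metres_per_pixel(altitude, diag_fov_deg=60.0, width=640, height=480):
    # Ground footprint of the image diagonal for a downward-facing camera,
    # assuming the drone hovers level (zero roll/pitch).
    diag_px = math.hypot(width, height)                      # 800 px for 640x480
    diag_m = 2.0 * altitude * math.tan(math.radians(diag_fov_deg) / 2.0)
    return diag_m / diag_px

def pixel_to_world(u, v, drone_x, drone_y, altitude, width=640, height=480):
    # Convert a pixel (u, v) to field coordinates, assuming the image x-axis
    # is aligned with the field x-axis (i.e. yaw already compensated).
    s = metres_per_pixel(altitude)
    return (drone_x + (u - width / 2.0) * s,
            drone_y - (v - height / 2.0) * s)   # image v grows downward
```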
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed as a football-playing robot. The details of the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further extension, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and take images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera mounted on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image-processing block. In this project, only the planar motion of the drone in (x,y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple paths such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied to track planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x,y,θ) measured from the top-camera images are compared with the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented on the drone. Using identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent to the drone as fly commands.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state consists of three regions. In the dead-zone region, if the error in a direction is smaller than a predefined value, the output of the controller is zero. This creates a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined by the error and its derivative with PD coefficients (Fig. 4). Since there is no position-dependent force in the motion equation of the drone, an I-action is not necessary. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
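The dead-zone PD law described above can be summarized, for one state, as in the following sketch (illustrative Python; the gains and dead-zone width are placeholders, not the tuned project values).<br />

```python
def dead_zone_pd(error, d_error, dead_zone, kp, kd):
    # Inside the dead zone the controller outputs zero, giving the drone a
    # "comfort zone" in which it receives no motion commands at all.
    if abs(error) < dead_zone:
        return 0.0
    # Outside the dead zone a plain PD law acts on the full error; the
    # error is deliberately NOT offset by the dead-zone width, so commands
    # near the zone edge stay large enough to avoid the LLC's oscillation
    # region. No I-action is needed (no position-dependent force).
    return kp * error + kd * d_error
```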
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ,θ,ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation about the specific axes is important, as is the sequence of rotations. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction does not change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
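With roll and pitch assumed zero, the global-to-drone transformation reduces to a planar rotation by the yaw angle, as in this sketch (illustrative Python; the sign convention for yaw is an assumption).<br />

```python
import math

def global_to_drone(vx_g, vy_g, yaw):
    # With roll and pitch assumed zero, the full RPY rotation matrix reduces
    # to a planar rotation about the z-axis by the yaw angle psi; a command
    # expressed in global coordinates is rotated into the drone frame as
    # v_drone = R(psi)^T * v_global.
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * vx_g + s * vy_g, -s * vx_g + c * vy_g)
```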
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To its left, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, the robots must be able to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored in the memory of the Turtle and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap; details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone, which were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The TechUnited player robots communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that suited the needs of the project best was handpicked. This data, as stated earlier, is the information on the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment; the information is then sent to the main computer (running Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Tolcer
https://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45688 Implementation MSD16 2017-10-22T20:08:02Z<p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (B.O.O.P.)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines and players requires image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we do not alter it and use it as-is. Preferably we would use this software to also process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
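As a sketch of the colour conversion and filtering step, the following Python snippet converts an RGB pixel to YCbCr and tests whether it lies in the "ball corner" of the CbCr plane. The full-range conversion coefficients are standard; the thresholds are illustrative, not the tuned project values.<br />

```python
def rgb_to_ycbcr(r, g, b):
    # Full-range (JPEG) RGB -> YCbCr conversion for 8-bit values. MATLAB's
    # rgb2ycbcr uses the slightly different studio-range BT.601 variant,
    # but the colour-filtering idea is identical.
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_ball_colour(r, g, b, cb_max=110.0, cr_min=140.0):
    # Red, orange and yellow pixels sit in the low-Cb / high-Cr corner of
    # the CbCr plane; cb_max and cr_min are illustrative thresholds.
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb < cb_max and cr > cr_min
```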
<br />
=== Line Detection ===<br />
Line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line-detection algorithm is updated and reused. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line-detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball-detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; these colors lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, for noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. A blob-recognition algorithm then returns the blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list of blobs and their properties, each blob is checked to determine whether it could be a ball: blobs that are too big or too small are removed from the list. For each remaining candidate ball, a confidence is calculated based on the blob size and roundness: <br />
<br />
 confidence = (minorAxis / majorAxis) * (min(Rblob, Rball) / max(Rblob, Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
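The confidence computation can be sketched as follows (illustrative Python; taking the blob radius as the mean of the semi-axes is an assumption, since the definition of Rblob is not specified above).<br />

```python
def ball_confidence(minor_axis, major_axis, r_expected):
    # Confidence = roundness * size match, as in the formula above.
    # The blob radius Rblob is estimated here as the mean of the two
    # semi-axes (an assumption); r_expected is the expected ball radius
    # in pixels at the current drone altitude.
    r_blob = (minor_axis + major_axis) / 4.0
    roundness = minor_axis / major_axis
    size = min(r_blob, r_expected) / max(r_blob, r_expected)
    return roundness * size
```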
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of filtering on color in the CbCr plane, the filtering is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of blob sizes accepted as possible players is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball-detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As with the line detection, the algorithm developed by the previous generation is essentially reused; a detailed explanation of it is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. An update was therefore added to handle the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is known (or at least predicted), and based on these coordinates the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it still sometimes yields false positives and false negatives, and a further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For collision detection we can rely on two sources of information: the world model and the raw images. If we keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. This detection uses the list of blobs generated by the object-detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected player radius. If the following condition holds, a possible collision is detected:<br />
<br />
 if (((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
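In code, the condition amounts to the following check (illustrative Python translation of the MATLAB condition above):<br />

```python
def possible_collision(minor_axis, major_axis, r_min):
    # One elongated blob that is at least one player wide and roughly two
    # players long suggests that two players' covers have merged into a
    # single blob, i.e. a possible collision. r_min is the minimal
    # expected player radius in pixels.
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * r_min
            and major_axis >= 4 * r_min)
```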
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in various ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image-processing algorithms are developed under one essential assumption: the drone's angular position is stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, i.e. x, y and yaw (ψ). However, to handle the refereeing tasks and the image processing, the drone altitude must also be known. The drone has its own altimeter, whose output data is accessible. The obtained altitude is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
<br />
[[File:FrameRef_seperate.png|thumb|centre|500px|Fig.: Coordinate axis of a captured frame]]<br />
[[File:RelativeCoordinates.png|thumb|centre|800px|Fig.: Relative coordinates of captured frame with respect to the reference frame]]<br />
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent; for instance, it sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for an agent's controller. As shown in Fig. 1, it is assumed that the World Model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated from an agent's camera. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path-planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns an agent the task of locating an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent's controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to also take the velocity vector of the object into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated ball position some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve it, we require a model of the drone motion including the controller, to calculate the time it takes to reach a certain point given the drone's initial condition. Then, in the search algorithm, for each time step ahead of the ball the time-to-target (TT) of the drone is calculated (see Fig. 3); the target position is simply calculated from the time ahead. The reference position is the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move in only one direction, the same strategy can be applied: the reference value should be determined only in the Turtle's direction of motion, so only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
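The search for the lead time t0 can be sketched as below (illustrative Python; a constant-speed drone model is used here in place of the identified first-order closed-loop model).<br />

```python
import math

def find_time_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    dt=0.05, t_max=5.0):
    # Search over candidate lead times t0: predict the ball position at
    # t + t0 with a constant-velocity model, estimate the drone's
    # time-to-target TT for that point, and return the first t0 with
    # TT <= t0 (the condition t0 = TT from the text).
    n = int(round(t_max / dt))
    for k in range(n + 1):
        t0 = k * dt
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0], target[1] - drone_pos[1])
        if dist / drone_speed <= t0:
            return t0, target
    # The ball outruns the drone: fall back to the farthest prediction.
    return t_max, (ball_pos[0] + ball_vel[0] * t_max,
                   ball_pos[1] + ball_vel[1] * t_max)
```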
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning calculated from the drones' objectives (see Fig. 4). The collision-avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision-avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong velocity command to each drone, perpendicular to its velocity vector, so as to maintain a safe distance; the command is sent to the LLC and stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act as a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players) to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific 'set' functions to be called to change the values inside the WM, as shown in Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
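A minimal sketch of this storage role, in Python for illustration (the names only loosely mirror Tables 1 and 2; the real interface is the MATLAB WorldModel class):<br />

```python
class Player:
    def __init__(self):
        self.pos = (0.0, 0.0)   # last known (x, y) on the field

class WorldModel:
    def __init__(self, n_players):
        # n_players is the number of players per team; the ball count is
        # hardcoded to 1 as described above. Two teams of n_players is an
        # assumption of this sketch.
        self.ball = (0.0, 0.0)
        self.drone = (0.0, 0.0, 0.0)    # x, y, yaw
        self.turtle = (0.0, 0.0, 0.0)
        self.players = [Player() for _ in range(2 * n_players)]

    # Data may be read freely, but is changed only through set-functions,
    # so that no skill overwrites world data by accident.
    def set_ball(self, x, y):
        self.ball = (float(x), float(y))

    def set_drone(self, x, y, yaw):
        self.drone = (float(x), float(y), float(yaw))

    def set_player(self, i, x, y):
        self.players[i].pos = (float(x), float(y))
```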
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
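The outlier logic described above (two consecutive measurements more than 0.5 m from the strong estimate trigger a re-initialization) can be sketched as follows. In this Python sketch, a simple exponential-smoothing update stands in for the actual particle-filter update of the strong hypothesis.<br />

```python
import math

OUTLIER_DIST = 0.5    # metres, threshold from the text above
OUTLIER_COUNT = 2     # consecutive outliers before re-initialising

class TwoHypothesisTracker:
    """Sketch of the strong/weak hypothesis logic. The smoothing
    constant `alpha` is an assumed stand-in for the particle-filter
    based strong-filter update."""
    def __init__(self, x0, alpha=0.1):
        self.strong = x0          # heavily filtered estimate
        self.outliers = []        # recent measurements far from it
        self.alpha = alpha

    def update(self, z):
        if math.dist(self.strong, z) > OUTLIER_DIST:
            self.outliers.append(z)
            if len(self.outliers) >= OUTLIER_COUNT:
                # assume a real change of direction: the weak
                # hypothesis becomes the strong filter's new start
                self.strong = z
                self.outliers = []
        else:
            self.outliers = []
            a = self.alpha
            self.strong = tuple(a*zi + (1 - a)*si
                                for zi, si in zip(z, self.strong))
        return self.strong
```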
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given in the table below. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. it trusts the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors would pass along a confidence parameter, such as a variance in the case of normally distributed uncertainty. This variance determines how much a measurement is trusted, and distinguishes between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, and to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to handle sensors that detect multiple players at once. The system thus needs to know which measurement corresponds to which player; this is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
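The nearest-neighbor matching described above can be sketched as follows (Python illustration, with the fallback to the next-nearest still-free player when two measurements pick the same player).<br />

```python
import math

def match(measurements, last_positions):
    """Sketch of the 'Match' step: assign each measured position to the
    nearest last-known player position; on a conflict, the later
    measurement falls back to its next-nearest free player."""
    assignment = {}           # measurement index -> player index
    taken = set()
    for mi, z in enumerate(measurements):
        # players sorted by distance to this measurement
        order = sorted(range(len(last_positions)),
                       key=lambda pi: math.dist(z, last_positions[pi]))
        for pi in order:
            if pi not in taken:      # nearest still-free player
                assignment[mi] = pi
                taken.add(pi)
                break
    return assignment
```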
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw rate and vertical speed. The corresponding forward and sideways velocities in the body frame are measured by sensors inside the drone. In addition, there are three LEDs on the drone which can be detected by the camera above the field; from the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and suppress the measurement noise, so that the subsequent closed-loop drone control can be robust. Since the flying height imposes no strict requirements on the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt: a floating-point value in the range [-1, 1]. Command (b) is left-right tilt: a floating-point value in the range [-1, 1]. d is the drone angular speed in the range [-1, 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, ψ) is displayed in the global frame (blue).]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs, and the corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig. 2) gives a visual impression of the original data measured from the top camera; it clearly indicates what the drone motion looks like in one degree of freedom. To make the signal continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces a reasonable estimate for the empty data points. <br />
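The interpolation step can be sketched as follows (Python sketch; the actual preprocessing is done in MATLAB). It assumes the gaps are interior, i.e. the first and last samples are valid.<br />

```python
def fill_gaps(t, x):
    """Linearly interpolate missing (None) camera samples.
    t: sample times; x: measured positions with None where the top
    camera did not detect the drone."""
    known = [(ti, xi) for ti, xi in zip(t, x) if xi is not None]
    out = []
    for ti, xi in zip(t, x):
        if xi is not None:
            out.append(xi)
            continue
        # neighbours that bracket the gap
        left = max((p for p in known if p[0] < ti), key=lambda p: p[0])
        right = min((p for p in known if p[0] > ti), key=lambda p: p[0])
        w = (ti - left[0]) / (right[0] - left[0])
        out.append(left[1] + w * (right[1] - left[1]))
    return out
```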
====Coordinate systems====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one in the body frame and one in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d), and the measured velocities are also expressed in the body frame. The positions measured by the top camera are computed in the global coordinate system. <br><br><br />
Data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model is identified as the response to the input commands (a, b, c, d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. Filtering in the body frame avoids having to design a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
====Model identification from input to position====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figure above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from these data. In the real world, no system is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable model estimate. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model in state-space form is: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model is evaluated in MATLAB by comparing it with the real response; the result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone was measured and modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
No real system is perfectly linear; this nonlinear behavior may cause the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the further Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue that has been investigated: the data selected for identification were measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
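A per-axis Kalman filter built on such an identified second-order model can be sketched as follows. In this Python sketch, the matrices A and B are placeholders (the real values come from the identified state-space models above) and the sample time is assumed; the update step is simply skipped when the top camera misses the LEDs.<br />

```python
DT = 0.05               # assumed sample time [s]
A = [[0.9, 0.0],
     [DT,  1.0]]        # state x = [velocity, position], placeholder values
B = [0.5, 0.0]          # assumed response to the tilt command
Q = 1e-3                # process noise added to each state variance
R = 1e-2                # top-camera measurement noise (position only)

def mat2mul(X, Y):
    """2x2 matrix product."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def kf_step(x, P, u, z=None):
    # predict with the identified model
    x = [A[0][0]*x[0] + A[0][1]*x[1] + B[0]*u,
         A[1][0]*x[0] + A[1][1]*x[1] + B[1]*u]
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    P = mat2mul(mat2mul(A, P), At)
    P = [[P[0][0] + Q, P[0][1]], [P[1][0], P[1][1] + Q]]
    # update only when the top camera actually detected the LEDs
    if z is not None:
        S = P[1][1] + R                  # innovation variance (H = [0 1])
        K = [P[0][1] / S, P[1][1] / S]   # Kalman gain
        y = z - x[1]                     # position innovation
        x = [x[0] + K[0]*y, x[1] + K[1]*y]
        P = [[P[0][0] - K[0]*P[1][0], P[0][1] - K[0]*P[1][1]],
             [P[1][0] - K[1]*P[1][0], P[1][1] - K[1]*P[1][1]]]
    return x, P
```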
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for refereeing. The built-in properties of the drone as given on the manufacturer’s website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone’s own structure, control electronics and software are used for positioning the drone; besides, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. Therefore, the first idea was to disassemble it and mount it on a swivel to tilt it down 90 degrees, which would require some structural changes. Since the whole implementation is realized in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after some trial and error, it was observed that capturing and transferring the images of the embedded drone camera to MATLAB is neither easy nor straightforward; further effort showed that using this camera is either not compatible with MATLAB or causes a lot of delay. Therefore, the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is performed in MATLAB, but taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time: the best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used to limit the processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed that the horizontal FOV is close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2; the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To have straightforward communication and satisfactory image quality, a Wi-Fi camera with a TCP/IP interface was selected. This camera, the Ai-Ball, is described in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
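The initialization sequence can be sketched as follows (Python sockets instead of the MATLAB UDP objects used in the project). The navdata wake-up bytes and the AT command format follow the AR.Drone SDK; treat the details as assumptions to be checked against the SDK documentation.<br />

```python
import socket

DRONE_IP = '192.168.1.1'
AT_PORT = 5556       # control
NAV_PORT = 5554      # navdata

seq = 0
def at_cmd(name, *args):
    """Build an AT command string as defined in the AR.Drone SDK:
    AT*<name>=<sequence number>[,args]<CR>."""
    global seq
    seq += 1
    payload = ','.join([str(seq)] + [str(a) for a in args])
    return 'AT*{}={}\r'.format(name, payload)

def init_drone():
    ctrl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.settimeout(0.001)                       # 1 ms timeout, as above
    # wake up the navdata stream (first step of the initiation sequence)
    nav.sendto(b'\x01\x00\x00\x00', (DRONE_IP, NAV_PORT))
    # set the horizontal-plane reference for the internal controller
    ctrl.sendto(at_cmd('FTRIM').encode(), (DRONE_IP, AT_PORT))
    return ctrl, nav
```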
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone is itself far from perfect and not critical either: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of this search, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the added weight, the camera's batteries were removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV), defined above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to compute the real-world size of the image frame and the corresponding real-world dimension per pixel, and it is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
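The conversion from FOV to real-world distance per pixel can be sketched as follows (Python; the project embeds the equivalent computation in Simulink). It assumes a downward-facing camera, square pixels, and a flat ground plane.<br />

```python
import math

def metres_per_pixel(height, diag_fov_deg=60.0, res=(640, 480)):
    """Ground coverage of one pixel, derived from the diagonal FOV
    and the aspect ratio of the sensor."""
    w_px, h_px = res
    diag_px = math.hypot(w_px, h_px)
    # half of the footprint diagonal on the ground, scaled to the width
    half_diag = height * math.tan(math.radians(diag_fov_deg) / 2)
    half_width = half_diag * (w_px / diag_px)
    return 2 * half_width / w_px
```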
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project, the Turtle is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple paths such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor performs only small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig. 3). The input-output diagram of the controller for each state consists of three regions. In the dead-zone region, if the error in one direction is less than a predefined value, the controller output is zero; this results in a comfort zone within which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined by the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the equations of motion of the drone, integral action is not necessary. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, errors outside the dead zone are not offset by the dead-zone width; this prevents small commands in the oscillation region from being sent to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix based on Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
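The dead-zone PD law described above can be sketched per state as follows (Python; the gains and dead-zone width are assumed placeholder values, not the tuned project parameters).<br />

```python
DEAD_ZONE = 0.1     # [m] assumed comfort-zone radius
KP, KD = 1.0, 0.4   # assumed PD coefficients

def hlc_output(error, d_error):
    """High-level controller for one state: zero output inside the
    dead zone, plain PD outside it. The error is deliberately NOT
    offset by the dead-zone width, so small commands in the LLC's
    oscillation region are never sent."""
    if abs(error) < DEAD_ZONE:
        return 0.0
    return KP * error + KD * d_error
```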
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Φ, φ, θ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction does not change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
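The resulting yaw-only transformation can be sketched as follows (Python illustration of the planar rotation used between the global frame and the drone body frame).<br />

```python
import math

def global_to_body(vx_g, vy_g, psi):
    """Rotate a planar vector from the global frame into the drone body
    frame. Because pitch and roll stay small and z is constant, the
    full RPY rotation reduces to this yaw-only rotation."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g,
            -s * vx_g + c * vy_g)

def body_to_global(vx_b, vy_b, psi):
    """Inverse transformation (transpose of the rotation above)."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_b - s * vy_b,
            s * vx_b + c * vy_b)
```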
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script for the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code-base of TechUnited. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle. This information is read by the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink, as depicted in the figure above. The code can be accessed through the repository.<br />
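The idea of the UDP data exchange can be sketched as follows. The project itself implements this with a C S-function and Simulink UDP Send/Receive blocks; the Python sketch below only illustrates packing a handful of positions into a datagram payload. The packet layout (six little-endian floats) is an assumption for illustration, not the TechUnited RTDb format.

```python
import socket
import struct

# Hypothetical payload: (x, y) for the turtle, the ball and one player,
# packed as six little-endian 32-bit floats. The real RTDb layout differs.
PACKET_FMT = "<6f"

def pack_state(turtle, ball, player):
    """Serialize three (x, y) positions into a UDP payload."""
    return struct.pack(PACKET_FMT, *turtle, *ball, *player)

def unpack_state(payload):
    """Recover the three (x, y) positions from a received payload."""
    vals = struct.unpack(PACKET_FMT, payload)
    return vals[0:2], vals[2:4], vals[4:6]

def send_state(sock, addr, turtle, ball, player):
    """Send one state packet to the base station at `addr`."""
    sock.sendto(pack_state(turtle, ball, player), addr)
```

A receiver on the Windows PC would simply call `unpack_state` on each datagram.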
<br />
=References=<br />
<references/></div>
Implementation MSD16 2017-10-22T19:46:11Z<p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the BOOP detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is and do not alter it. Preferably we could use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
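The conversion can be sketched per pixel as follows. The project uses Matlab's image processing toolbox; this Python sketch only shows the standard full-range RGB-to-YCbCr formulas and an illustrative color gate. The threshold values in `is_ball_color` are assumptions, not the project's tuned values.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range (JPEG-style) RGB -> YCbCr for 8-bit channel values."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 + 0.564 * (b - y)
    cr = 128.0 + 0.713 * (r - y)
    return y, cb, cr

def is_ball_color(r, g, b, cb_max=110.0, cr_min=140.0):
    """Red/orange/yellow pixels fall toward low Cb and high Cr,
    the 'corner' of the CbCr plane mentioned above.
    The thresholds here are illustrative assumptions."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb <= cb_max and cr >= cr_min
```

Applying this gate to every pixel yields the binary mask used by the blob detection described below.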
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is that the line detection code was separated from the combined detection code created by the previous generation and turned into an isolated skill function.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; colors that lie in the upper-left corner of the CbCr-plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image, a blob recognition algorithm returns blobs with their properties, such as the blob center and major and minor axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
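The confidence formula above can be wrapped in a small function. Python is used for illustration (the project code is MATLAB); defining the blob radius Rblob as half the mean of the two axis lengths is an assumption, since the original code may define it differently.

```python
def ball_confidence(minor_axis, major_axis, r_ball):
    """Confidence that a blob is the ball, following the formula above:
    (minor / major) * (min(Rblob, Rball) / max(Rblob, Rball))."""
    # Assumed blob radius: half the mean of the two axis lengths.
    r_blob = 0.25 * (minor_axis + major_axis)
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```

A perfectly round blob whose radius matches the expected ball radius scores 1.0; elongation or a size mismatch lowers the confidence.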
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than that of the surroundings. Moreover, the range of detected blobs which could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball is: if a player is seen from the top, they will appear different than when they are seen from an angle. A wider acceptance range for blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball out of the pitch detection algorithm is called. Similar to the line detection case, the algorithm developed by the previous generation is used essentially. The detailed explanation of this ball out of pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update and improvement were added to handle the cases where the ball position is predicted via the particle filter. Although the ball is not detected by the camera, the position of the ball with respect to the field coordinate system can still be known (or at least predicted), and based on this ball coordinate information the in/out decision can be further improved. This part was added to the ball out of pitch refereeing skill function. However, it sometimes yields false positive and false negative results as well; a further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both these methods would be ideal. However, since the collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs that is generated in the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object. Moreover, the axes are compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
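The condition can be wrapped in a small predicate; it flags an elongated blob that is large enough to contain two merged players. Python is used for illustration (the original check is MATLAB); the thresholds are taken directly from the condition above.

```python
def possible_collision(minor_axis, major_axis, minimal_object_radius):
    """True when a blob is elongated (axis ratio > 1.5) and large enough
    in both axes to plausibly be two players touching each other."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * minimal_object_radius
            and major_axis >= 4 * minimal_object_radius)
```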
<br />
== Positioning skills ==<br />
Position data for each of the components can be obtained in diverse ways. In this project, the planar position (x-y-ψ) of the refereeing agent (here only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degree-of-freedom (DOF). The linear coordinates (x,y,z) and corresponding angular positions roll, pitch, yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed based on an essential assumption: the drone's angular positions are well stabilized, such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles have not been taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter and the output data of the altimeter is accessible. The obtained drone altitude data is fused with the planar position data and the following position vector is obtained for the drone.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
[[File:FrameRef_seperate.png|thumb|centre|500px|Fig.: Coordinate axis of a captured frame]]<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by which agent. For instance, this block sends "detect ball" as a task for agent A (drone) and "locate player" for agent B. The path planning block then requests from the World Model the latest information on the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects like the ball, whether it has been updated by an agent camera or not. In the latter case, the particle filter gives an estimation of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the path planning block. The first one is related to the case of multiple drones, in order to avoid collisions between them. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, in the case of a large distance between drone and ball, the drone should track a position ahead of the object to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point, given the initial condition of the drone. Then, in the searching algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied. For the ground robot, the reference value should be determined only in the moving direction of the turtle. Hence, only the X component (turtle moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
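The searching algorithm can be sketched as follows, assuming a constant-velocity ball model and a constant-speed drone model (TT = distance / speed). The project uses a model of the drone with its controller instead, so this Python sketch only illustrates stepping the look-ahead time t0 forward until TT(target(t0)) ≤ t0.

```python
import math

def reference_ahead(drone_pos, ball_pos, ball_vel, v_drone,
                    t_max=5.0, dt=0.05):
    """Search for the look-ahead time t0 with TT(target(t0)) ~= t0.

    Assumptions (not the project's model): the ball moves with constant
    velocity and the drone flies straight at constant speed v_drone."""
    for k in range(int(t_max / dt) + 1):
        t0 = k * dt
        # Predicted ball position t0 seconds ahead: [x(t+t0), y(t+t0)].
        tx = ball_pos[0] + ball_vel[0] * t0
        ty = ball_pos[1] + ball_vel[1] * t0
        # Time to target for the simplified drone model.
        tt = math.hypot(tx - drone_pos[0], ty - drone_pos[1]) / v_drone
        if tt <= t0:
            return (tx, ty)   # first intercept point found
    # No intercept within t_max: fall back to the current ball position.
    return ball_pos
```

For the turtle, the same search would run on the X component only, as described above.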
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents in a way that avoids collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning that is calculated based on the objectives of the drones (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision between them. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to the drones in a direction that maintains a safe distance. This command, a velocity perpendicular to the velocity vector of each drone, is sent to the LLC and is stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be a possible area of interest for others who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
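The storage interface can be sketched as follows. The actual class is written in MATLAB and its exact method names are in the tables above (which are images), so the names here are illustrative assumptions; the point is that writes go through explicit set functions while reads are unrestricted.

```python
class Player:
    """Players are a class of their own, since their number can vary."""
    def __init__(self):
        self.position = (0.0, 0.0)

class WorldModel:
    """Minimal storage sketch: last known positions, writable only
    through explicit set_* methods so nothing overwrites WM data
    by accident. Initialized with n players per team."""
    def __init__(self, n_players_per_team):
        self._ball = (0.0, 0.0)
        self._drone = (0.0, 0.0)
        self._turtle = (0.0, 0.0)
        self._players = [Player() for _ in range(2 * n_players_per_team)]

    def set_ball(self, x, y):
        self._ball = (float(x), float(y))

    def set_drone(self, x, y):
        self._drone = (float(x), float(y))

    def set_turtle(self, x, y):
        self._turtle = (float(x), float(y))

    def set_player(self, i, x, y):
        self._players[i].position = (float(x), float(y))

    @property
    def ball(self):
        return self._ball

    def player(self, i):
        return self._players[i].position
```

As in the text, `W = WorldModel(n)` creates storage for one ball and 2n players.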
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are both used by the same particle filter, as it does not matter from what source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in the case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
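The outlier-reset rule described above (two consecutive measurements more than 0.5 m from the estimate re-initialise the strong filter on the last measurement) can be sketched as follows. The exponential smoothing standing in for the particle update is an assumption for brevity; only the reset logic mirrors the text.

```python
import math

class BallTracker:
    """Sketch of the strong-filter reset: single outliers are ignored as
    noise, two consecutive outliers are treated as a change in direction."""
    def __init__(self, x0, threshold=0.5):
        self.estimate = x0          # current (strong) estimate (x, y)
        self.threshold = threshold  # 0.5 m, as in the text
        self.outliers = 0

    def update(self, z, alpha=0.2):
        d = math.hypot(z[0] - self.estimate[0], z[1] - self.estimate[1])
        if d > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:   # change of direction, not noise
                self.estimate = z    # last measurement re-initialises
                self.outliers = 0
            return self.estimate
        self.outliers = 0
        # Illustrative smoothing in place of the particle update.
        self.estimate = (self.estimate[0] + alpha * (z[0] - self.estimate[0]),
                         self.estimate[1] + alpha * (z[1] - self.estimate[1]))
        return self.estimate
```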
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
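The ‘Match’ step can be sketched as a greedy nearest-neighbour assignment that falls back to the next-nearest free player on a conflict, as described above. Python is used for illustration of the logic, including its known non-optimality when two measurements compete for the same player.

```python
import math

def match_measurements(measurements, players):
    """Match each measured (x, y) position to the closest last-known
    player position; if that player is already taken, pick the
    next-nearest free one (greedy, hence not globally optimal)."""
    assignment = []
    taken = set()
    for z in measurements:
        order = sorted(range(len(players)),
                       key=lambda i: math.hypot(z[0] - players[i][0],
                                                z[1] - players[i][1]))
        pick = next((i for i in order if i not in taken), None)
        if pick is not None:
            taken.add(pick)
        assignment.append(pick)
    return assignment
```

With a high update frequency and two players this works well; with many players entering and leaving the field of view, a globally optimal assignment would be safer.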
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera on top of the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera on top of the field cannot detect the drone LEDs at every time instant, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the subsequent closed-loop control system for the drone can be robust. As the flying height of the drone is not demanding for the system, the height of the drone is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visualized impression of the original data measured from the top camera. Based on fig 2, the motion data indicates clearly what the motion of the drone is like in one degree of freedom. To make it continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces a reasonable estimate for the empty data points. <br />
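The gap-filling can be sketched as simple linear interpolation over the timestamps of the valid samples. The original preprocessing was done in MATLAB (e.g. with interp1), so this Python version is only illustrative.

```python
def fill_gaps(t, x):
    """Linearly interpolate missing (None) position samples, as in the
    preprocessing of the top-camera data; edge gaps are held constant."""
    known = [(ti, xi) for ti, xi in zip(t, x) if xi is not None]
    out = []
    for ti, xi in zip(t, x):
        if xi is not None:
            out.append(xi)
            continue
        # Nearest valid samples on either side of the gap.
        left = max((p for p in known if p[0] < ti), default=None)
        right = min((p for p in known if p[0] > ti), default=None)
        if left is None:
            out.append(right[1])
        elif right is None:
            out.append(left[1])
        else:
            w = (ti - left[0]) / (right[0] - left[0])
            out.append(left[1] + w * (right[1] - left[1]))
    return out
```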
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used. One is the coordinate system in the body frame, the other one is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in body frame coordinate system via control signals (a, b, c, d). The velocities measured are displayed also in the body frame coordinate system. The positions measured by the top camera are calculated in global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter. The model identified is the response to the input commands (a, b, c and d) in the body frame. The filtered data is then transferred back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid making the Kalman filter parameter-varying. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
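The frame transformation itself is a standard planar rotation by the yaw angle ψ measured by the top camera; a minimal sketch:

```python
import math

def body_to_global(vx_body, vy_body, psi):
    """Rotate a body-frame velocity into the global frame by yaw psi."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_body - s * vy_body,
            s * vx_body + c * vy_body)

def global_to_body(vx, vy, psi):
    """Inverse rotation: global frame back into the body frame."""
    return body_to_global(vx, vy, -psi)
```

Filtering happens on body-frame quantities; `body_to_global` then maps the filtered result back for the position feedback, as in the block diagram.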
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response of input b is measured by the top camera. The preprocessed data is shown in following. And this processed data will be used in model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from the data shown above. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions need to be made to help MATLAB make a reasonable estimation of the model. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in Matlab. The result represents how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman models the AR drone with a delay of 4 samples due to the wireless communication. Compared with the results measured several times, the estimation is reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may cause the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the further Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue which has been investigated. The data selected for identification was measured in a situation where the battery was full, the orientation was fixed, and the drone started from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with the state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone that are given on the manufacturer's website are listed below in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and it sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project it was decided to use the drone's own structure, control electronics and software for the positioning of the drone. Apart from that, controlling a drone is complicated and is also out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, and this camera is used to capture images. The camera is mounted at the front of the drone; for refereeing, however, it should look downwards. Therefore the first idea was to disassemble it and mount the camera on a swivel so it could be tilted down 90 degrees, which would require some structural changes. Since the whole implementation is realized in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after some trial and error it was observed that capturing and transferring images from the embedded drone camera is not straightforward in MATLAB. Further effort showed that using this drone camera for capturing images is either incompatible with MATLAB or introduces a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore an indirect route is required, and this costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a horizontal view close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
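As a sanity check on these numbers, the diagonal-to-horizontal FOV conversion for an ideal pinhole (rectilinear) camera can be sketched as follows. This is an illustrative calculation only, not part of the project code:<br />

```python
import math

def fov_components(diag_fov_deg, aspect_w=16, aspect_h=9):
    """Split a diagonal field of view into horizontal and vertical
    components, assuming an ideal rectilinear (pinhole) lens."""
    # Half-angle tangent of the diagonal FOV.
    half_diag = math.tan(math.radians(diag_fov_deg) / 2)
    d = math.hypot(aspect_w, aspect_h)
    # Project the diagonal half-angle onto the horizontal/vertical axes.
    h = 2 * math.degrees(math.atan(aspect_w / d * half_diag))
    v = 2 * math.degrees(math.atan(aspect_h / d * half_diag))
    return h, v

# Spec sheet: 92 degree diagonal FOV at 16:9.
h_fov, v_fov = fov_components(92)
```

For a 92° diagonal at 16:9 this gives roughly 84° horizontally, so the measured value of about 70° suggests additional sensor cropping or lens effects beyond the pure geometry.<br />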
<br />
Although these measurements were obtained with the drone camera, it is not used in the final project because of the difficulty of acquiring its images in MATLAB. Instead, an alternative camera system was investigated. To combine easy communication with satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
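As an illustrative sketch (in Python rather than the project's MATLAB code) of the initialization traffic described above, assuming the AT-command syntax and navdata trigger packet from the AR.Drone SDK documentation; names and the exact trigger payload should be verified against that documentation:<br />

```python
import socket

# Values from the initialization list above.
DRONE_IP = "192.168.1.1"
CONTROL_PORT = 5556   # AT commands
NAVDATA_PORT = 5554   # navdata stream

def at_command(name, seq, *args):
    """Format an AR.Drone AT command; every command carries an
    increasing sequence number followed by optional arguments."""
    fields = ",".join(str(a) for a in (seq,) + args)
    return "AT*{}={}\r".format(name, fields)

def init_drone(control_sock, navdata_sock, seq=1):
    """Wake up the navdata stream, then set the horizontal-plane
    reference (flat trim) with FTRIM, as described in the text."""
    navdata_sock.sendto(bytes([1, 0, 0, 0]), (DRONE_IP, NAVDATA_PORT))
    control_sock.sendto(at_command("FTRIM", seq).encode(),
                        (DRONE_IP, CONTROL_PORT))
    return seq + 1  # next sequence number to use
```

The sockets themselves would be created and configured with the timeout and buffer size listed above.<br />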
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
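A hedged sketch of how such a four-value command can be serialized, assuming the AT*PCMD format from the AR.Drone SDK, in which floating-point arguments are transmitted as the signed-integer reinterpretation of their IEEE-754 bit pattern (the actual wrapper in this project is a MATLAB function):<br />

```python
import struct

def float_to_at_int(x):
    """AR.Drone AT commands transmit floats as the signed 32-bit
    integer sharing the same IEEE-754 bit pattern."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def pcmd(seq, roll, pitch, gaz, yaw):
    """Progressive command: four values in [-1, 1] for tilt (x, y),
    vertical speed and yaw rate, as described in the wrapper above."""
    vals = ",".join(str(float_to_at_int(v)) for v in (roll, pitch, gaz, yaw))
    return "AT*PCMD={},1,{}\r".format(seq, vals)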
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After this search, we finally decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, one needs a Wi-Fi antenna. The camera is mounted facing down at the front of the drone. To reduce the weight of the added system, the battery of the camera is removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to know the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
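Under a pinhole-camera assumption and level flight, the ground area covered by the downward-facing Ai-Ball and the resulting distance per pixel can be sketched as follows (illustrative only; the project embeds equivalent constants in the Simulink code):<br />

```python
import math

def ground_footprint(altitude_m, diag_fov_deg=60.0, res=(640, 480)):
    """Width and height of the ground area seen by a downward-facing
    camera, plus meters per pixel, assuming a pinhole model, square
    pixels and level flight (zero roll/pitch)."""
    w_px, h_px = res
    d_px = math.hypot(w_px, h_px)
    half_diag = math.tan(math.radians(diag_fov_deg) / 2)
    width = 2 * altitude_m * (w_px / d_px) * half_diag
    height = 2 * altitude_m * (h_px / d_px) * half_diag
    return width, height, width / w_px
```

At an altitude of 2 m this gives a footprint of roughly 1.85 m by 1.39 m, i.e. about 2.9 mm per pixel.<br />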
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed as a football-playing robot. The details of the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any extension, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and take images. Given the situation of the game and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are simple ones such as straight lines, while aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor performs only small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ), measured from the top-camera images, are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state consists of three regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero. This creates a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than this value, the output is determined from the error and the derivative of the error using PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an integral action is not necessary for the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This prevents small commands in the oscillation region from being sent to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix based on Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
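The dead-zone PD law described above can be sketched as follows (a minimal, per-axis illustration; the gains and dead-band width are placeholders, not the tuned project values):<br />

```python
def deadzone_pd(error, d_error, kp, kd, deadband):
    """Dead-zone PD law from the description above: zero output inside
    the comfort zone, plain PD (without offsetting the error by the
    dead-zone width) outside it."""
    if abs(error) < deadband:
        return 0.0  # inside the comfort zone: do not move
    return kp * error + kd * d_error

# Example: 5 cm error inside a 10 cm dead band produces no command.
u_inside = deadzone_pd(0.05, 0.0, 1.0, 0.1, 0.1)
u_outside = deadzone_pd(0.5, 0.2, 1.0, 0.1, 0.1)
```

Because the error is not offset by the dead-band width, the output jumps from zero directly to the full PD value at the dead-zone boundary, which is exactly the behavior described above for avoiding small commands in the oscillation region.<br />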
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is as important as the rotations themselves. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
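A minimal sketch of this reduced transformation for a planar velocity command, assuming zero roll and pitch as stated above:<br />

```python
import math

def global_to_drone(vx_g, vy_g, yaw):
    """Rotate a planar velocity command from the global frame into the
    drone body frame. With small roll/pitch the full rotation matrix
    reduces to a 2-D rotation by the yaw angle psi."""
    c, s = math.cos(yaw), math.sin(yaw)
    vx_b = c * vx_g + s * vy_g
    vy_b = -s * vx_g + c * vy_g
    return vx_b, vy_b
```

For example, with the drone yawed 90° to the left, a global command along +x maps onto the drone's -y body axis.<br />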
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot from a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally; therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and which send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45685Implementation MSD162017-10-22T19:45:22Z<p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is and do not alter it. Preferably we would also use this software to process the images from the drone; however, trying to understand years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and reused in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill function.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list of blobs and their properties, it is determined whether each blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minorAxis / majorAxis) * (min(Rblob, Rball) / max(Rblob, Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
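The confidence metric can be transcribed directly as a small function (variable names are illustrative):<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a blob is the ball, as described above: the
    product of the blob roundness (minor/major axis ratio) and how
    close the blob radius is to the expected ball radius. Both
    factors lie in (0, 1], so a perfect match gives 1.0."""
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match

# A perfectly round blob of exactly the expected radius scores 1.0.
perfect = ball_confidence(10.0, 10.0, 5.0, 5.0)
```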
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of color filtering on the CbCr plane, the filtering is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of blob sizes that could be players is larger for the object detection than for the ball detection. This is done because the players are not perfectly round like the ball: a player seen from the top appears different than one seen from an angle. A wider range of accepted blobs lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-the-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused; its detailed explanation is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is still known (at least predicted), and based on this coordinate information the in/out decision can be further improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well; further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take an image of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since the collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection makes use of the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if (((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
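The same condition, written as a small standalone function (a transcription of the MATLAB-style condition above; names are illustrative):<br />

```python
def possible_collision(minor_axis, major_axis, min_radius):
    """Image-based collision test from the text: a blob that is
    clearly elongated and large in both axes suggests two players
    standing against each other rather than a single player."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)
```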
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch, yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under an essential assumption: the drone's angular positions are well stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, and its output data is accessible. The obtained altitude data is fused with the planar position data to obtain the full drone position.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
[[File:FrameRef_seperate.png|thumb|centre|500px]]<br />
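Combining the drone position vector above with the Ai-Ball FOV, a sketch of mapping a detected pixel to field coordinates might look as follows. The assumptions are pinhole optics, zero roll/pitch, and a camera looking straight down through the drone's center (the camera's forward mounting offset is ignored):<br />

```python
import math

def pixel_to_world(px, py, drone_x, drone_y, drone_psi, altitude,
                   res=(640, 480), diag_fov_deg=60.0):
    """Map a pixel in the downward-facing Ai-Ball image to field
    coordinates, given the drone pose (x, y, psi, z) from the
    position vector above. Illustrative sketch only."""
    w, h = res
    d = math.hypot(w, h)
    # Meters per pixel at this altitude, from the diagonal FOV.
    m_per_px = 2 * altitude * math.tan(math.radians(diag_fov_deg) / 2) / d
    # Offset from the image centre, in metres, in the camera frame.
    ox = (px - w / 2) * m_per_px
    oy = (h / 2 - py) * m_per_px  # image y grows downward
    # Rotate by the drone yaw into the field frame.
    c, s = math.cos(drone_psi), math.sin(drone_psi)
    return drone_x + c * ox - s * oy, drone_y + s * ox + c * oy
```

An object at the image centre maps exactly to the drone's planar position, which is a quick sanity check for the conversion.<br />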
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by which agent. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the world model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for an agent's controller. As shown in Fig. 1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent's camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about an object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path-planning block. The first is related to the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as a reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns an agent the task of locating an object on the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is finding the optimal look-ahead time t0 to set as the desired reference. To solve this, we require a model of the drone motion including the controller, in order to calculate the time it takes to reach a certain point given the initial conditions of the drone. Then, in the search algorithm, for each time step ahead of the ball the time to target (TT) of the drone is calculated (see Fig. 3); the target position is simply calculated from the look-ahead time. The reference position is then the position that satisfies t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move in only one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the Turtle's direction of motion, so only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
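The search for the look-ahead time satisfying t0 = TT can be sketched as follows, using a deliberately crude constant-speed time-to-target model in place of the drone-plus-controller model mentioned above (all parameters are illustrative):<br />

```python
import math

def lookahead_reference(drone_pos, ball_pos, ball_vel, drone_speed,
                        dt=0.05, t_max=5.0):
    """Search over look-ahead times t0 (in the spirit of Fig. 3):
    predict the ball at t + t0 with a constant-velocity model,
    estimate the drone's time to target TT from an assumed average
    speed, and return the first predicted position with TT <= t0."""
    for k in range(int(t_max / dt) + 1):
        t0 = k * dt
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0],
                          target[1] - drone_pos[1])
        tt = dist / drone_speed  # crude time-to-target model
        if tt <= t0:
            return target
    # No intercept within the horizon: chase the furthest prediction.
    return (ball_pos[0] + ball_vel[0] * t_max,
            ball_pos[1] + ball_vel[1] * t_max)
```

For a ball 2 m ahead moving away at 1 m/s and a drone averaging 2 m/s, the search settles on the intercept point 4 m out, rather than chasing the ball's current position.<br />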
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning based on the drones' objectives (see Fig. 4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance; the commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that achieves collision avoidance, and is stopped once the drones are at safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented; it could, however, be an area of interest for those continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
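Since the block was not implemented in the project, the sketch below only illustrates the repulsion idea described above: when two drones come closer than a safety distance, each receives a velocity command perpendicular to its own velocity, pointing away from the other drone. The safety distance and gain are illustrative values, not project parameters.<br />

```python
import math

def repel_commands(p1, v1, p2, v2, d_safe=1.5, gain=1.0):
    """Return perpendicular repulsion velocity commands for two drones at
    positions p1, p2 with velocities v1, v2, or None when they are far
    enough apart (d_safe and gain are illustrative values)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) >= d_safe:
        return None
    cmds = []
    for (vx, vy), away in ((v1, (-dx, -dy)), (v2, (dx, dy))):
        # a unit vector perpendicular to the drone's own velocity
        n = math.hypot(vx, vy) or 1.0
        perp = (-vy / n, vx / n)
        # pick the perpendicular direction pointing away from the other drone
        if perp[0] * away[0] + perp[1] * away[1] < 0:
            perp = (-perp[0], -perp[1])
        cmds.append((gain * perp[0], gain * perp[1]))
    return cmds
```

For two drones flying side by side in the x-direction, the commands push one drone in -y and the other in +y, so both keep their forward motion while separating.<br />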
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
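Since the set functions of Tables 1 and 2 are only shown as images, the sketch below is a minimal illustration of the storage idea described above: last known positions are globally readable, but only changed through dedicated set methods. All method and field names here are assumptions for illustration; the actual (non-integrated) class may differ.<br />

```python
class Players:
    """Per-team player storage; n players per team, as in W = WorldModel(n)."""
    def __init__(self, n):
        self.pos = [(0.0, 0.0)] * n

class WorldModel:
    """Minimal sketch of the World Model storage role: positions are read
    freely but mutated only via set_* methods, so processes cannot
    accidentally overwrite WM data."""
    def __init__(self, n_players):
        self.ball = (0.0, 0.0)
        self.drone = (0.0, 0.0, 0.0)    # x, y, yaw
        self.turtle = (0.0, 0.0, 0.0)
        self.team1 = Players(n_players)
        self.team2 = Players(n_players)

    def set_ball(self, x, y):
        self.ball = (float(x), float(y))

    def set_drone(self, x, y, psi):
        self.drone = (float(x), float(y), float(psi))

    def set_player(self, team, idx, x, y):
        team_obj = self.team1 if team == 1 else self.team2
        team_obj.pos[idx] = (float(x), float(y))
```
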
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. A particle filter, also known as Monte Carlo Localization, was chosen. The main reason is that a particle filter can handle multiple-object tracking, which proves useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as explained below. Ideally, this filter should perform three tasks:<br>
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
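The two-hypothesis scheme can be sketched in one dimension as follows. The weak hypothesis simply follows the raw measurements; when two consecutive measurements deviate more than the 0.5 m threshold from the strong estimate, the strong filter is re-initialized at the last measurement. The smoothing factor is an illustrative value, not a tuned project parameter.<br />

```python
class TwoHypothesisTracker:
    """1-D sketch of the strong/weak hypothesis scheme described above."""
    def __init__(self, x0, threshold=0.5, alpha=0.9):
        self.x = x0             # strong estimate
        self.alpha = alpha      # strong-filter smoothing factor (illustrative)
        self.threshold = threshold
        self.outliers = 0       # consecutive out-of-band measurements

    def update(self, z):
        err = abs(z - self.x)
        if err > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:   # a change of direction, not noise
                self.x = z           # re-initialize the strong filter
                self.outliers = 0
        else:
            self.outliers = 0
            # heavy smoothing: the estimate hardly changes direction
            self.x = self.alpha * self.x + (1 - self.alpha) * z
        return self.x
```

A single outlier leaves the strong estimate untouched (a likely false positive), while a second consecutive outlier resets it, mirroring the behavior described above.<br />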
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter which source a measurement comes from. Ideally, these sensors pass along a confidence parameter, such as a variance in the case of normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate from inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br>
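Since the particle velocity update equation is only shown as an image, it is not reproduced here; the sketch below is a hypothetical complementary-filter update that is merely consistent with the parameter descriptions above (α_v stronger, α_x weaker/trust measurements more; α_z is omitted). The exact form, default values, and function name are assumptions.<br />

```python
def particle_update(x_old, v_old, z_new, z_old, dt,
                    alpha_v=0.8, alpha_x=0.3):
    """Hypothetical update consistent with the described parameters:
    alpha_v blends the old velocity against the measured velocity
    (larger -> 'stronger' filter); alpha_x blends the predicted position
    against the raw measurement (larger -> 'weaker', trusts measurements)."""
    v_meas = (z_new - z_old) / dt
    v_new = alpha_v * v_old + (1 - alpha_v) * v_meas
    x_new = (1 - alpha_x) * (x_old + v_new * dt) + alpha_x * z_new
    return x_new, v_new
```
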
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one player to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br>
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
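The ‘Match’ step can be sketched as a greedy nearest-neighbour assignment, as described above: each measurement is matched to its nearest last known player position, falling back to the next-nearest player when that player is already taken. This is an illustrative reimplementation, not the project code.<br />

```python
def match(measurements, known_positions):
    """Greedy nearest-neighbour matching of measured positions to last
    known player positions; a taken player forces the measurement onto
    its next-nearest candidate (not globally optimal, as noted above)."""
    assigned = {}
    taken = set()
    for m_idx, (mx, my) in enumerate(measurements):
        order = sorted(range(len(known_positions)),
                       key=lambda p: (known_positions[p][0] - mx) ** 2 +
                                     (known_positions[p][1] - my) ** 2)
        for p in order:
            if p not in taken:
                assigned[m_idx] = p
                taken.add(p)
                break
    return assigned
```

With two measurements both closest to player 0, the first keeps player 0 and the second falls back to player 1, which is exactly the sub-optimal case the text warns about for larger teams.<br />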
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame are measured by sensors inside the drone. In addition, three LEDs on the drone can be detected by the camera above the field; based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone's LEDs in every frame, a Kalman filter is designed to predict the drone motion and to suppress the measurement noise, so that the closed-loop control of the drone remains robust. As the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
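The key feature of this filter is that the camera misses detections, so the update step must be skippable. The sketch below shows a discrete constant-velocity Kalman filter for one axis in which the correction is applied only when a detection exists; the A, Q, R values are illustrative, not the identified drone model.<br />

```python
import numpy as np

# Simple constant-velocity Kalman filter for one axis. The update step is
# skipped whenever the top camera did not detect the drone (z is None).
# Model and noise matrices are illustrative, not the identified model.
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
H = np.array([[1.0, 0.0]])              # camera measures position only
Q = np.eye(2) * 1e-3                    # process noise (illustrative)
R = np.array([[1e-2]])                  # measurement noise (illustrative)

def kf_step(x, P, z):
    x = A @ x                           # predict
    P = A @ P @ A.T + Q
    if z is not None:                   # update only when a detection exists
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P
```

During the ~25% of frames without a detection, only the prediction runs, so the estimate coasts on the model until the camera sees the LEDs again.<br />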
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is treated as a black box. To model its dynamics, predefined signals are applied as inputs; the corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the samples measured by the camera are empty, the drone position information is incomplete. Fig.2 visualizes the original data measured by the top camera; it clearly shows the drone motion in one degree of freedom. To make the signal continuous, interpolation is applied. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces reasonable estimates for the empty data points. <br />
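The gap-filling step can be sketched as a linear interpolation over the valid samples, assuming missing camera frames are marked as NaN (the representation of missing samples in the actual data set is an assumption here):<br />

```python
import numpy as np

def fill_gaps(t, x):
    """Linearly interpolate missing (NaN) camera samples, as done for the
    ~25% of top-camera frames in which the drone was not detected."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    good = ~np.isnan(x)                 # indices with a valid detection
    return np.interp(t, t[good], x[good])
```
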
==== Coordinate system introduction ====<br />
Since the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one fixed to the body frame and one global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d), and the measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
Data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model identified is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame for feedback. The idea is to filter the data in the body frame so that a parameter-varying Kalman filter is not needed. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
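The rotation round trip around the filter can be sketched as two yaw rotations (a minimal sketch of the concept above; since pitch and roll variations are small, only the yaw angle ψ enters the rotation):<br />

```python
import math

def to_body(x, y, psi):
    """Rotate a global-frame vector into the body frame (yaw angle psi),
    so the Kalman filter can operate on a time-invariant body-frame model."""
    c, s = math.cos(psi), math.sin(psi)
    return c * x + s * y, -s * x + c * y

def to_global(xb, yb, psi):
    """Inverse rotation: body frame back to the global frame for feedback."""
    c, s = math.cos(psi), math.sin(psi)
    return c * xb - s * yb, s * xb + c * yb
```

Filtering happens between these two calls: top-camera positions are rotated into the body frame, filtered there, and rotated back, which is what keeps the filter itself parameter-invariant.<br />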
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data shown below is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above show the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. Since no real system is perfectly linear, due to external disturbances and component uncertainty, some assumptions are needed for MATLAB to make a reasonable estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR drone is modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
The nonlinear behavior of the system may cause the mismatch between the identified model and the real response.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant properties of the drone, as given on the manufacturer's website, are listed below in Table 1; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone using its free software (for both Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera, whose capabilities are given in Table 1, and its own built-in computer, controller and driver electronics. Since it is a consumer product, its design, body and controller are very robust. Therefore, the drone's own structure, control electronics and software are used for positioning the drone; besides, controlling a drone from scratch is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, placed at the front, which is used to capture images. For refereeing, however, the camera should look down. The first idea was therefore to disassemble the camera and mount it on a swivel so it could tilt down 90 degrees, which would require some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images must be reachable from MATLAB. However, after considerable trial and error, it was observed that capturing and transferring images from the embedded drone camera to MATLAB is not straightforward: it is either incompatible with MATLAB or causes a lot of delay. The idea of a swiveled drone camera was therefore abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with the built-in software. An indirect route is therefore required, which costs processing time: the best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, defined as shown in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a FOV close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2; the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
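The relation between the specified diagonal FOV, the aspect ratio, and the distance covered per pixel can be sketched as below. This assumes an ideal rectilinear lens (angles combine through their tangents), which is only an approximation for a wide-angle consumer camera; function names are illustrative.<br />

```python
import math

def fov_from_diagonal(diag_deg, aspect_w, aspect_h):
    """Horizontal/vertical FOV from a diagonal FOV and aspect ratio,
    assuming a rectilinear lens (half-angles combine via tangents)."""
    d = math.tan(math.radians(diag_deg) / 2)
    diag = math.hypot(aspect_w, aspect_h)
    h = 2 * math.degrees(math.atan(d * aspect_w / diag))
    v = 2 * math.degrees(math.atan(d * aspect_h / diag))
    return h, v

def metres_per_pixel(fov_deg, height_m, n_pixels):
    """Ground distance covered by one pixel at a given camera height."""
    span = 2 * height_m * math.tan(math.radians(fov_deg) / 2)
    return span / n_pixels
```

For the specified 92° diagonal FOV at 16:9 this gives roughly 84° horizontal, noticeably more than the ~70° found by measurement, which is consistent with the remark above that the specification and the measured FOV disagree.<br />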
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
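The initialization can be sketched as a few UDP datagrams. The `AT*CONFIG` string format follows the referenced AR.Drone SDK2; the wake-up byte on the Navdata port and the exact order of the steps are assumptions based on common SDK usage and may differ from the pictured initiation sequence.<br />

```python
import socket

DRONE_IP = "192.168.1.1"     # remote host from the initialization above
AT_PORT, NAV_PORT = 5556, 5554

def at_config(seq, key, value):
    """Build an AT*CONFIG command string (AR.Drone SDK2 format)."""
    return 'AT*CONFIG=%d,"%s","%s"\r' % (seq, key, value)

def init_navdata(sock_nav, sock_at, seq=1):
    """Sketch of the Navdata initiation: wake the Navdata port, switch to
    demo mode, then acknowledge with AT*CTRL (order is an assumption)."""
    sock_nav.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAV_PORT))
    sock_at.sendto(at_config(seq, "general:navdata_demo", "TRUE").encode(),
                   (DRONE_IP, AT_PORT))
    sock_at.sendto(("AT*CTRL=%d,0\r" % (seq + 1)).encode(),
                   (DRONE_IP, AT_PORT))
```

After this, the FTRIM command (flat-trim) is sent the same way as a plain AT string to port 5556.<br />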
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
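The input side of this wrapper can be sketched as below. The AR.Drone AT commands transmit each floating-point argument as the signed 32-bit integer that shares its IEEE-754 bit pattern; the mapping of the four input values onto the `AT*PCMD` argument order is an assumption here, so check the SDK before relying on it.<br />

```python
import struct

def f2i(f):
    """Reinterpret a float's IEEE-754 bit pattern as a signed 32-bit int,
    as required by the AR.Drone AT command protocol."""
    return struct.unpack("<i", struct.pack("<f", float(f)))[0]

def pcmd(seq, tilt_x, tilt_y, vz, vpsi):
    """Wrapper sketch: four doubles in [-1, 1] -> an AT*PCMD string
    (argument mapping onto the SDK's roll/pitch/gaz/yaw is assumed)."""
    for v in (tilt_x, tilt_y, vz, vpsi):
        assert -1.0 <= v <= 1.0
    return "AT*PCMD=%d,1,%d,%d,%d,%d\r" % (
        seq, f2i(tilt_x), f2i(tilt_y), f2i(vz), f2i(vpsi))
```

For example, -0.8 is transmitted as -1085485875, the bit pattern of the float -0.8.<br />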
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone is itself far from perfect and not critical either: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After some searching, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is mounted at the front of the drone, facing down. To reduce the added weight, the camera's batteries are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV), defined above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields 640x480 pixel images.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to know the real-world size of the image frame and the corresponding real-world dimension per pixel, and is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project, the Turtle is used as a referee. The software developed at TechUnited did not need any further extension, since parts of the extensive existing code could be reused to fulfill the referee role. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agents' positions, a desired position is calculated for each agent based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as the reference for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera mounted on the ceiling and used as feedback in the control system.<br />
The drone height z is maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, since the ball and the objects on the pitch move in 2-D. Consequently, the desired drone trajectories are simple, such as straight lines, and aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking the planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the drone's speed controller, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, via a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig.3). The input-output diagram of the controller for each state consists of 3 regions. In the dead-zone region, if the error in a direction is smaller than a predefined value, the output of the controller is zero; this results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and its derivative with PD coefficients (Fig.4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary. Furthermore, to avoid oscillations in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width; this prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
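The dead-zone PD law of Figs. 3-5 can be sketched per state as follows. As described above, the error outside the dead zone is deliberately not offset by the dead-zone width; gains and the dead-zone width here are illustrative, not the tuned project values.<br />

```python
def deadzone_pd(err, derr, dead, kp, kd):
    """Dead-zone PD law: zero output inside the comfort zone, plain PD
    (no offset by the dead-zone width) outside it, so small commands in
    the LLC's oscillatory region are never sent."""
    if abs(err) < dead:
        return 0.0
    return kp * err + kd * derr
```

This law is evaluated separately for each controlled state (x, y, θ), and the resulting global-frame commands are then rotated into the drone frame before being sent to the LLC.<br />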
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles; within this method, the order of the rotations about the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction does not change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image-processing algorithms, useful information can be extracted from these images and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of the players<br><br />
and other entities present on the field, can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored in the memory of the Turtle and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information had to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The TechUnited player-robots communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was selected. This data, as stated earlier, is the information on the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the code base of TechUnited. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and sending it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is shown schematically in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the BOOP detection, it is necessary to know where the ball and the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as is and do not alter it. Preferably, we would use this software to also process the images from the drone. However, understanding years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we therefore decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
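The conversion can be sketched per pixel as below, using the full-range BT.601 (JPEG) coefficients. Note that this is an illustrative Python sketch, not the project code: MATLAB's rgb2ycbcr uses a slightly different, studio-swing scaling.<br />

```python
def rgb_to_ycbcr(r, g, b):
    """Convert an RGB pixel (components 0-255) to YCbCr.

    Full-range BT.601 (JPEG) coefficients; Y is luma, Cb/Cr are the
    blue- and red-difference chroma components centered at 128.
    """
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

In this space, color thresholds (e.g. the "upper-left corner of the CbCr plane" used for the ball) become largely independent of brightness, which is the reason for the conversion.<br />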
<br />
=== Line Detection ===<br />
The line detection is achieved using the [http://nl.mathworks.com/help/images/hough-transform.html Hough transform]. <br />
Since this project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is not changed. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, so that it becomes an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange or yellow; these colors lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter out some noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major and minor axis lengths. From this list of blobs and their properties, it is determined whether a blob could be a ball. Blobs that are too big or too small are removed from the list. For the remaining candidate balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
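The confidence formula above can be sketched as follows. How the blob radius Rblob is derived from the axis lengths is an assumption here (mean of the semi-axes), as the text does not specify it.<br />

```python
def ball_confidence(minor_axis, major_axis, r_ball):
    """Confidence that a blob is the ball, per the formula above:
    roundness (minor/major axis ratio) times the size match between
    the blob radius and the expected ball radius r_ball.
    """
    # Mean semi-axis as the blob radius -- an illustrative assumption.
    r_blob = (minor_axis + major_axis) / 4.0
    roundness = minor_axis / major_axis
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```

A perfectly round blob of exactly the expected ball size scores 1.0; elongation or a size mismatch both pull the confidence down.<br />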
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of filtering on color in the CbCr plane, the filtering is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of detected blobs which could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range for blobs lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can be known (or at least predicted), and based on this coordinate information the in/out decision can be improved. This part was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection, we can rely on two sources of information: the world model and the raw images. If we can keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal. However, since collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection uses the list of blobs generated by the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked. The axes are compared to each other to determine the roundness of the object, and compared with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius)<br />
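This condition can be expressed as a small helper; the thresholds are those from the condition above (Python sketch of the MATLAB check):<br />

```python
def possible_collision(major_axis, minor_axis, min_radius):
    """A blob may contain two touching players if it is clearly
    elongated and larger than a single player along both axes.
    """
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * min_radius
            and major_axis >= 4 * min_radius)
```

The intuition: two players standing against each other merge into one blob roughly twice as long as it is wide, which a single (round) player cannot produce.<br />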
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in various ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has six degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image-processing algorithms are developed under one essential assumption: the drone's angular positions are stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, whose output data is accessible. The obtained altitude data is fused with the planar position data, and the following position vector is obtained for the drone.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
[[File:FrameRef_seperate.png]]<br />
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents, which is sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by which agent. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects like the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors have been addressed in the path-planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is also possible to take the velocity vector of the object into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve this, we require a model of the drone motion with the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position would be [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the Turtle. Hence, only the x-component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
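The search for the time ahead t0 can be sketched as below. The drone is modeled here as a constant-speed point mass, which is a simplifying assumption; the project uses a model of the drone together with its controller to compute the time to target (TT).<br />

```python
import math

def reference_ahead(drone_pos, drone_speed, ball_pos, ball_vel,
                    dt=0.05, t_max=5.0):
    """Search for the lookahead time t0 satisfying t0 = TT.

    For each candidate t0, predict the ball position t0 seconds ahead
    (constant-velocity ball model) and compute the drone's time to
    reach it. Return the first predicted position the drone can reach
    in time, i.e. the reference [x(t+t0), y(t+t0)].
    """
    t0 = 0.0
    while t0 <= t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0],
                          target[1] - drone_pos[1])
        tt = dist / drone_speed      # time for the drone to reach target
        if tt <= t0:                 # drone arrives no later than the ball
            return target
        t0 += dt
    # No intersection within the horizon: fall back to the current ball position.
    return ball_pos
```

For a drone twice as fast as the ball and trailing it, this returns the interception point ahead of the ball rather than chasing its current position.<br />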
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planning should create paths for the agents in a way that avoids collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning computed from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision-avoidance mode to repel the drones from getting closer. This is achieved by sending a relatively strong velocity command to the drones, perpendicular to each drone's velocity vector, in a direction that maintains a safe distance. This command is sent to the LLC and is stopped once the drones are in safe positions. Since this project deals with only one drone, collision avoidance was not implemented; it could be a possible area of interest for whoever continues this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
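The repel command described above can be sketched as follows. The command magnitude (`strength`) is an assumed tuning value, and as noted, this block was never implemented in the project.<br />

```python
import math

def repel_command(vel, strength=1.0):
    """Collision-avoidance velocity command perpendicular to the
    drone's current velocity vector (rotated +90 degrees).

    `strength` is a hypothetical tuning parameter for how strongly
    the drones are pushed apart.
    """
    vx, vy = vel
    n = math.hypot(vx, vy)
    if n == 0.0:
        return (strength, 0.0)  # arbitrary direction when hovering
    # Unit velocity vector rotated by +90 degrees, scaled by strength.
    return (-vy / n * strength, vx / n * strength)
```

The second drone would receive the opposite perpendicular command, so the pair separates without either having to brake along its path.<br />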
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
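The storage idea can be sketched as below. The actual World Model is a MATLAB class; this Python sketch only illustrates the pattern of explicit set-functions guarding the stored data, and the method names here are illustrative.<br />

```python
class WorldModel:
    """Storage sketch of the World Model: last known object positions,
    writable only through explicit set-methods so that processes cannot
    accidentally overwrite WM data.
    """

    def __init__(self, n_players):
        self.ball = None
        self.drone = None
        self.turtle = None
        # Two teams of n_players each; no positions known yet.
        self.players = [[None] * n_players for _ in range(2)]

    def set_ball(self, xy):
        self.ball = tuple(xy)

    def set_drone(self, pose):
        self.drone = tuple(pose)  # (x, y, psi, z)

    def set_turtle(self, xy):
        self.turtle = tuple(xy)

    def set_player(self, team, idx, xy):
        self.players[team][idx] = tuple(xy)
```

As in the MATLAB version, reads are unrestricted while writes go through the set-methods, mirroring Table 1 and Table 2 above.<br />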
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
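The re-initialization rule just described can be sketched as follows (Python, illustrative; the 0.5 m threshold comes from the text):<br />

```python
import math

def check_reinit(estimate, measurements, threshold=0.5):
    """If the last two consecutive measurements are both further than
    `threshold` metres from the current (strong-filter) estimate,
    return the last measurement as the new initial position.
    Otherwise return None, i.e. keep the current estimate.
    """
    if len(measurements) < 2:
        return None

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    if all(dist(m, estimate) > threshold for m in measurements[-2:]):
        return measurements[-1]
    return None
```

A single outlier (likely a false positive from image processing) is thus ignored, while two consecutive outliers are treated as a real change in direction.<br />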
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the Turtle. These measurements are all used by the same particle filter, as it does not matter from what source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
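The greedy matching described above can be sketched as below (Python, illustrative of the MATLAB ‘Match’ function):<br />

```python
import math

def match_players(known_positions, measurements):
    """Greedy nearest-neighbour matching of measurements to the last
    known player positions: if a measurement's nearest player is
    already taken, fall back to the next-nearest free player.

    Returns {measurement_index: player_index}.
    """
    assigned = {}
    taken = set()
    for mi, m in enumerate(measurements):
        # Players sorted by distance to this measurement.
        order = sorted(range(len(known_positions)),
                       key=lambda p: math.hypot(m[0] - known_positions[p][0],
                                                m[1] - known_positions[p][1]))
        for p in order:
            if p not in taken:
                assigned[mi] = p
                taken.add(p)
                break
    return assigned
```

As noted in the text, this greedy scheme is not globally optimal when two measurements compete for the same player, but with a high update rate and few players it rarely matters.<br />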
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and the vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control system for the drone can be made robust. As the flying height of the drone is not critical for the system, the height of the drone is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visualized impression of the original data measured by the top camera. Based on fig.2, the motion data clearly indicates what the motion of the drone looks like in one degree of freedom. To make the data continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation yields a reasonable guess for the empty data points. <br />
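The gap-filling by interpolation can be sketched as below (a plain-Python sketch; the project itself presumably used MATLAB's interpolation routines):<br />

```python
def fill_gaps(t, x):
    """Linearly interpolate missing (None) samples in a measurement
    series, as done for the ~25% of empty top-camera position samples.

    t: sample times; x: positions with None where the camera missed
    the drone. Leading/trailing gaps are filled by holding the
    nearest valid value.
    """
    known = [(ti, xi) for ti, xi in zip(t, x) if xi is not None]
    out = []
    for ti, xi in zip(t, x):
        if xi is not None:
            out.append(xi)
            continue
        before = [(tk, xk) for tk, xk in known if tk < ti]
        after = [(tk, xk) for tk, xk in known if tk > ti]
        if before and after:
            (t0, x0), (t1, x1) = before[-1], after[0]
            out.append(x0 + (x1 - x0) * (ti - t0) / (t1 - t0))
        elif before:
            out.append(before[-1][1])  # hold last value at the tail
        else:
            out.append(after[0][1])    # back-fill at the head
    return out
```

Linear interpolation is a reasonable choice here because the camera gaps are short relative to the drone's motion; longer gaps would be better served by the Kalman filter's prediction itself.<br />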
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems exist: one is the coordinate system in the body frame, the other is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via the rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the model identified is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, to avoid a parameter-varying Kalman filter. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from the data shown above. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions need to be made to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as <math>X = \begin{bmatrix} \dot{x} & x \end{bmatrix}^T</math>, which means velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result represents the extent to which the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there are 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimate is reasonable. <br><br><br />
<br />
In practice no system is perfectly linear; the nonlinear behavior of the system may explain the part of the response that the identified model does not match.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue and has been investigated. The data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in the y direction is described as a state-space model with state vector <math>[\dot{y}\ \ y]^T</math>, i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone with its free software (for both Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, as well as its own built-in computer, controller, and driver electronics. Since it is a consumer product, its design, body, and controller are robust. Therefore, in this project, the drone's own structure, control electronics, and software are used for positioning the drone. Besides, controlling a drone from scratch is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera points forward, but for refereeing it should look down. Therefore the first idea was to disassemble it and mount the camera on a swivel, tilting it down 90 degrees, which requires some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some trial and error it was observed that capturing and transferring images from the drone's embedded camera is not straightforward in MATLAB. Further effort showed that using this camera is either incompatible with MATLAB or introduces a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone brings some difficulties. Since the source code of the drone is not open, it is hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at 360p standard resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used for processing to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed the horizontal FOV to be close to 70°, although the camera is specified with a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were taken with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
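For illustration, the AT command strings described in the SDK carry floating-point arguments as the signed-integer reinterpretation of their IEEE-754 bit pattern. The following is a minimal sketch (the sequence numbering and command set are simplified; only FTRIM and PCMD are shown):

```python
import struct

def f2i(x):
    # AT commands transmit floats as the signed int32 with the same bits
    return struct.unpack('<i', struct.pack('<f', x))[0]

def at_ftrim(seq):
    # flat-trim: set the current attitude as the horizontal reference
    return "AT*FTRIM={}\r".format(seq)

def at_pcmd(seq, roll, pitch, gaz, yaw):
    # progressive move command; flag 1 enables the tilt/speed arguments
    return "AT*PCMD={},1,{},{},{},{}\r".format(
        seq, f2i(roll), f2i(pitch), f2i(gaz), f2i(yaw))
```

For example, a roll argument of -0.8 is transmitted as the integer -1085485875, which is the example given in the AR.Drone SDK documentation.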
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communication with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
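The input side of such a wrapper can be sketched as follows; this is an illustrative Python stand-in for the project's MATLAB wrapper, and the function names are hypothetical:

```python
def clamp(x, lo=-1.0, hi=1.0):
    """Saturate a single command channel to the allowed range."""
    return max(lo, min(hi, x))

def make_command(tilt_x, tilt_y, v_z, v_psi):
    """Saturate the four command channels to [-1, 1] before they are
    formatted into the UDP string sent to the drone."""
    return [clamp(v) for v in (tilt_x, tilt_y, v_z, v_psi)]
```

Clamping here guarantees that whatever the controller computes, the drone never receives a command outside the range its low-level controller accepts.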
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either: as long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After some searching, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone through a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code when converting measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
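The per-pixel scale follows directly from the diagonal FOV and the 4:3 (3-4-5) aspect ratio. A sketch of the geometry for a straight-down camera (illustrative only; the project computes this inside the Simulink code):

```python
import math

def ground_footprint(altitude, diag_fov_deg=60.0, aspect=(4, 3), px=(640, 480)):
    """Real-world size of the image frame for a downward-facing camera.
    The diagonal on the ground subtends the diagonal FOV; width and
    height follow from the aspect ratio (3-4-5 triangle for 4:3)."""
    diag = 2.0 * altitude * math.tan(math.radians(diag_fov_deg / 2.0))
    hyp = math.hypot(*aspect)
    width = diag * aspect[0] / hyp
    height = diag * aspect[1] / hyp
    return width, height, width / px[0]   # meters, meters, meters per pixel
```

At 1 m altitude this gives a footprint of roughly 0.92 m x 0.69 m, i.e. about 1.4 mm per pixel at 640x480.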
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element of the drone referee project. The agents in this project have two main capabilities: they can move and they can take images. Depending on the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These outputs of the path-planning block are set as reference values for the motion control block. As shown in Fig. 1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, since the ball and the objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are trajectories such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption made in this approach is that the quadrotor is subject to small angular maneuvers.<br />
As shown in Fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared with the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the speed controller of the drone, the low-level controller (LLC). In this project the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. With identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors that need to be controlled (Fig. 3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig. 4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width. This approach prevents sending small commands, which would excite the oscillation region, to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
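The dead-zone PD law described above can be sketched for a single axis as follows (an illustrative Python stand-in for the Simulink controller; gains and dead-zone width are placeholders):

```python
def deadzone_pd(error, d_error, kp, kd, deadzone):
    """HLC output for one axis: zero inside the comfort zone,
    PD action outside it. The error is deliberately NOT offset by
    the dead-zone width, so small near-zone commands are avoided."""
    if abs(error) < deadzone:
        return 0.0
    return kp * error + kd * d_error
```

Since the drone's equation of motion has no position-dependent force, no integral term is needed, which is why the controller is PD rather than PID.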
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
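With roll and pitch assumed zero, the transformation of a velocity command from the global frame to the drone body frame reduces to a planar rotation by the yaw angle. A minimal sketch under that assumption:

```python
import math

def global_to_body(vx, vy, psi):
    """Rotate a planar velocity command from the global frame into the
    drone body frame, using only the yaw angle psi (roll, pitch ~ 0)."""
    c, s = math.cos(psi), math.sin(psi)
    return c * vx + s * vy, -s * vx + c * vy
```

For example, a drone yawed 90° to the left sees a global +x command as a command along its negative body y-axis.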
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes these strings, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game, and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. The details can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information had to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the information on the location of the Turtle, the ball, and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in the MATLAB environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Tolcerhttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=45682Implementation MSD162017-10-22T19:36:16Z<p>Tolcer: /* Locating of the Objects : Ball & Player */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we do not alter it and use it as-is. Preferably we would use this software to also process the images from the drone; however, understanding years' worth of code in order to make it usable for the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's image processing toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space. For detecting the field, the lines, objects, and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
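The RGB-to-YCbCr conversion can be sketched per pixel with the BT.601 coefficients. This is a full-range variant for illustration (MATLAB's rgb2ycbcr uses a slightly different, studio-range scaling):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr conversion for one pixel.
    Y is luma; Cb/Cr are chroma offsets centered at 128."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```

Gray pixels map to Cb = Cr = 128, while reddish/orange pixels get a high Cr and low Cb, which is exactly why color filtering for the ball is easier in this space than in RGB.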
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm was updated and reused. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is the separation of the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill function.<br />
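The core of the Hough transform is a voting procedure over (theta, rho) line parameters. The following is an illustrative toy version, not the project's MATLAB implementation:

```python
import math

def hough_peak(points, n_theta=180):
    """Vote in (theta, rho) space for a set of edge pixels.
    For each point, rho = x*cos(theta) + y*sin(theta) is computed for
    every theta bin and rounded to an integer rho bin; the cell with
    the most votes corresponds to the dominant line."""
    acc = {}
    for (x, y) in points:
        for t in range(n_theta):
            th = math.radians(t)
            rho = round(x * math.cos(th) + y * math.sin(th))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return max(acc, key=acc.get)

# Edge pixels of a horizontal line y = 25 in a 50x50 binary image
pts = [(x, 25) for x in range(50)]
theta, rho = hough_peak(pts)
```

For the horizontal line the peak lands at theta = 90°, rho = 25, since every pixel on the line votes for that single cell.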
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used are red, orange, or yellow: colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which the pixels that fall into this corner get a value of 1 and the rest get a value of 0. Next, to filter out noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. From the resulting image, a blob recognition algorithm returns blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list of blobs and their properties, it is determined whether a blob could be a ball: blobs that are too big or too small are removed from the list. For each remaining possible ball, a confidence is calculated based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
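The confidence formula above can be sketched directly; it is 1.0 only for a perfectly round blob whose radius matches the expected ball radius:

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    """Confidence that a blob is the ball, per the formula above:
    roundness (minor/major axis ratio) times size match (ratio of
    the smaller to the larger of blob radius and expected radius)."""
    roundness = minor_axis / major_axis
    size = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size
```

Both factors lie in (0, 1], so any elongation or size mismatch pushes the confidence below 1, and the two penalties compound.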
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of doing the color filtering in the CbCr plane, it is done on the Y-axis only: since the players are coated with a black fabric, their Y-value is lower than that of the surroundings. Moreover, the range of blobs accepted as possible players is larger than in the ball detection. This is done because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle. A bigger acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused; its detailed explanation is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it cannot handle all use cases. Therefore an update was added to handle the cases where the ball position is predicted by the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be improved further. This was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positives and false negatives as well, so further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we keep track of the position and velocity of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take images of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection. This detection uses the list of blobs generated in the object detection algorithm. For each blob in this list, the lengths of the minor and major axes are checked: the axes are compared with each other to determine the roundness of the object, and with the minimal expected radius of a player. If the following condition holds, a possible collision is detected:<br />
<br />
if (((major_axis / minor_axis) > 1.5) && (minor_axis >= 2 * minimal_object_radius) && (major_axis >= 4 * minimal_object_radius))<br />
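The condition above can be sketched as a predicate: a blob that is clearly elongated and at least two player-radii wide along its minor axis (and four along its major axis) is treated as two players in contact rather than one:

```python
def possible_collision(minor_axis, major_axis, r_min):
    """Flag a blob as a possible collision: elongated (axis ratio
    above 1.5) and large enough to contain two touching players."""
    return (major_axis / minor_axis > 1.5
            and minor_axis >= 2 * r_min
            and major_axis >= 4 * r_min)
```

A single, roughly round player blob fails the ratio test and is never flagged, which keeps isolated players from triggering false collisions.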
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in diverse ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch, and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under an essential assumption: the drone's angular positions are stabilized well enough that the roll (φ) and pitch (θ) values are zero. Therefore these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y, and yaw (ψ) information. However, to handle the refereeing tasks and image processing, the drone altitude should also be known. The drone has its own altimeter, and its output data is accessible. The obtained altitude data is fused with the planar position data, giving the following position vector for the drone.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
[[File:FrameRef_seperate.pdf]]<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by the agents. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the world model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path-planning block generates a reference point for an agent's controller. As shown in Fig. 1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not it has been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path-planning block: first, the case of multiple drones, where collisions between them must be avoided; second, generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig. 2, when the distance between drone and ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as the reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach gives better tracking performance, but requires more computational effort. The problem that arises is the optimal time ahead t0 that should be used for the desired reference. To solve it, we need a model of the drone motion including the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig. 3); the target position follows directly from the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, for the ground agents, which move only in one direction, the same strategy can be applied: for the ground robot, the reference value should be determined only in the moving direction of the Turtle, so only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
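The search for the time ahead described above can be sketched as follows. The constant-speed drone model, the parameter values and the function name are placeholders for illustration, not the project's identified drone model:

```python
import math

def find_time_ahead(ball_pos, ball_vel, drone_pos, drone_speed, dt=0.05, t_max=5.0):
    """Search for the look-ahead time t0 where the drone's time-to-target (TT)
    equals t0, i.e. drone and ball arrive at the same point simultaneously.
    A constant-speed drone model stands in for the real identified model."""
    t0 = 0.0
    while t0 <= t_max:
        # predicted ball position t0 seconds ahead (constant-velocity ball)
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        dist = math.hypot(target[0] - drone_pos[0], target[1] - drone_pos[1])
        tt = dist / drone_speed          # time-to-target for the drone
        if tt <= t0:                     # first t0 satisfying t0 >= TT
            return target
        t0 += dt
    # fall back to the current ball position if no intersection is found
    return ball_pos
```

For a ball moving away along x at 1 m/s and a drone 2 m ahead with 1 m/s top speed, the search returns the meeting point around x = 1 m rather than chasing the ball's current position.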
<br />
=== Collision avoidance ===<br />
When drones are flying above a field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning calculated from the objectives of the drones (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that results in collision avoidance, and stops once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be a possible area of interest for others who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
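Since this block was not implemented in the project, the perpendicular repulsion command can only be sketched. The helper name and the fall-back for a hovering drone below are assumptions:

```python
import math

def repulsion_command(pos_a, vel_a, pos_b, v_repel=1.0):
    """Velocity command for drone A: perpendicular to its own velocity
    and pointing away from drone B, as described above (hypothetical helper)."""
    speed = math.hypot(vel_a[0], vel_a[1])
    if speed < 1e-9:
        # assumption: a hovering drone is pushed straight away from B instead
        dx, dy = pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]
        d = math.hypot(dx, dy) or 1.0
        return (v_repel * dx / d, v_repel * dy / d)
    # the two unit vectors perpendicular to A's velocity
    p1 = (-vel_a[1] / speed, vel_a[0] / speed)
    p2 = (vel_a[1] / speed, -vel_a[0] / speed)
    # pick the perpendicular that increases the distance to B
    away = (pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])
    chosen = p1 if (p1[0] * away[0] + p1[1] * away[1]) >= 0 else p2
    return (v_repel * chosen[0], v_repel * chosen[1])
```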
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters, applies sensor fusion where applicable, stores the filtered information, and monitors itself to produce outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and their position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
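A minimal Python sketch of this storage pattern follows; the original class is written in MATLAB, and the field values and the assumption of two teams of n players each are illustrative:

```python
class Player:
    def __init__(self):
        self.position = (0.0, 0.0)

class WorldModel:
    """Sketch of the WM storage role: state is written only through
    explicit set-functions, mirroring the described (MATLAB) class."""
    def __init__(self, n):
        self.ball = (0.0, 0.0)
        self.drone = (0.0, 0.0, 0.0)     # x, y, yaw
        self.turtle = (0.0, 0.0)
        # assumption: n players per team, two teams
        self.players = [Player() for _ in range(2 * n)]

    def set_ball(self, x, y):
        self.ball = (x, y)

    def set_drone(self, x, y, psi):
        self.drone = (x, y, psi)

    def set_player(self, i, x, y):
        self.players[i].position = (x, y)

W = WorldModel(2)        # two players per team
W.set_ball(1.5, -0.3)    # only set-functions may change WM data
```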
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, and originate from multiple sources, a filter can offer some advantages. A particle filter, also known as Monte Carlo localization, was chosen. The main reason is that a particle filter can handle multiple-object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
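The reset rule described above can be sketched as follows; the class name and the explicit two-outlier counter are illustrative, while the 0.5 m threshold is the one stated above:

```python
import math

class BallFilterReset:
    """Sketch of the reset rule: if two consecutive measurements lie more
    than `threshold` metres from the current (strong-filter) estimate, the
    latest measurement re-initialises the filter."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.outliers = 0

    def update(self, estimate, measurement):
        dist = math.hypot(measurement[0] - estimate[0],
                          measurement[1] - estimate[1])
        if dist > self.threshold:
            self.outliers += 1
            if self.outliers >= 2:   # two consecutive outliers
                self.outliers = 0
                return measurement   # reset: new initial position
        else:
            self.outliers = 0        # an isolated outlier is ignored
        return estimate              # keep the current estimate
```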
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors would pass along a confidence parameter, like a variance in the case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the 'Match' function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
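A stand-alone sketch of the greedy nearest-neighbour matching described above; the actual 'Match' function is nested inside the MATLAB particle filter, so names and structure here are illustrative:

```python
import math

def match(measurements, known_positions):
    """Greedy nearest-neighbour matching of measured positions to the last
    known player positions: if a measurement's nearest neighbour is already
    taken, it falls back to the next-nearest, as described above."""
    assigned = []
    taken = set()
    for m in measurements:
        # players sorted by distance to this measurement
        order = sorted(range(len(known_positions)),
                       key=lambda i: math.hypot(m[0] - known_positions[i][0],
                                                m[1] - known_positions[i][1]))
        for i in order:
            if i not in taken:
                taken.add(i)
                assigned.append(i)
                break
    return assigned  # assigned[k] = player index matched to measurement k
```

As noted, this greedy scheme is not optimal: when two measurements both lie closest to the same player, the second one is simply pushed to the next-nearest player.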
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
Since the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise. This makes the closed-loop control system for the drone more robust. Since the flying height of the drone is not demanding for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
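The role of the filter can be illustrated with a minimal 1-D constant-velocity sketch: the predict step always runs, while the measurement update is skipped whenever the top camera misses the LEDs. The noise values and sample time below are hypothetical, not the identified drone model:

```python
def kalman_step(x, v, P, z, dt=0.04, q=0.01, r=0.05):
    """One predict/update cycle of a scalar constant-velocity Kalman filter.
    z is the position measurement, or None when the camera misses the LEDs."""
    # predict: propagate position with the known velocity, grow the covariance
    x, P = x + v * dt, P + q
    if z is not None:                 # update only when a measurement exists
        K = P / (P + r)               # Kalman gain
        x = x + K * (z - x)
        P = (1 - K) * P
    return x, v, P

# example run with ~25% camera dropouts (None), as observed in practice
x, v, P = 0.0, 1.0, 1.0
for z in [0.05, None, 0.12, None, 0.21]:
    x, v, P = kalman_step(x, v, P, z)
```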
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
Since around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visual impression of the original data measured by the top camera. Based on fig 2, the motion data clearly indicates what the drone motion looks like in one degree of freedom. To make it continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation produces a reasonable estimate for the empty data points. <br />
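The pre-processing step might look like the following sketch: linear interpolation over the empty samples, with hold/back-fill at the edges. The actual preprocessing was done in MATLAB, so this Python version is illustrative:

```python
def interpolate_gaps(samples):
    """Linearly interpolate missing (None) camera samples."""
    out = list(samples)
    n = len(out)
    for i in range(n):
        if out[i] is None:
            # find the previous and next valid samples
            lo = i - 1
            while lo >= 0 and out[lo] is None:
                lo -= 1
            hi = i + 1
            while hi < n and samples[hi] is None:
                hi += 1
            if lo >= 0 and hi < n:
                frac = (i - lo) / (hi - lo)
                out[i] = out[lo] + frac * (samples[hi] - out[lo])
            elif lo >= 0:
                out[i] = out[lo]          # hold the last value at the end
            elif hi < n:
                out[i] = samples[hi]      # back-fill at the start
    return out
```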
====Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used. One is the coordinate system in the body frame, the other is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame coordinate system. The positions measured by the top camera are calculated in the global coordinate system. <br><br><br />
The data can be transformed between body frame and global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter. The identified model is the response to the input commands (a, b, c and d) in the body frame. The filtered data is then transferred back to the global frame as feedback. The basic idea is to filter the data in the body frame in order to avoid a parameter-varying Kalman filter. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
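The planar rotation between frames used around the filter can be sketched as below; this is a yaw-only rotation, consistent with the small pitch/roll assumption made later, and the function names are illustrative:

```python
import math

def global_to_body(vx_g, vy_g, psi):
    """Rotate a planar vector from the global frame into the drone body
    frame, using only the yaw angle psi."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_g + s * vy_g, -s * vx_g + c * vy_g)

def body_to_global(vx_b, vy_b, psi):
    """Inverse rotation: body frame back to the global frame."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * vx_b - s * vy_b, s * vx_b + c * vy_b)
```

Filtering can then be done in the body frame, and the result rotated back to the global frame as feedback.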
====Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)'''<br><br><br />
The response to input b is measured by the top camera. The preprocessed data is shown below, and this processed data is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from the data shown above. In the real world, nothing is perfectly linear, due to external disturbances and component uncertainty. Hence, some assumptions need to be made to help MATLAB make a reasonable estimation of the model. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result represents the extent to which the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modelled the AR drone, there are 4 samples of delay due to the wireless communication. Compared with results measured several times, the estimation is reasonable. <br><br><br />
<br />
In the real world, nothing is perfectly linear. The nonlinear behavior of the system may cause the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the subsequent Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue which has been investigated. The data selected for identification was measured in a situation where the battery is full, the orientation is fixed, and the drone starts from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector X = [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing task. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, it was decided to use the drone's own structure, control electronics and software for positioning of the drone. Apart from that, controlling a drone is complicated and also out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing it should look downwards. Therefore the first idea was to disassemble it and connect the camera to a swivel to tilt it down 90 degrees, which would require some structural changes. Since all the implementation is done in the MATLAB/Simulink environment, the camera images should be reachable from MATLAB. However, after some trial and error, it was observed that capturing and transferring the images of the embedded drone camera is not easy or straightforward in MATLAB. Further effort showed that using this drone camera for capturing images is either not compatible with MATLAB or causes a lot of delay. Therefore the idea of a swiveled drone camera was abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly using MATLAB is not possible with its built-in software. Therefore an indirect route is required, and this costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolution, processing is done at this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a view of about 70°, although the camera is specified to have a 92° diagonal FOV. The measurements and obtained results are summarized in Table 2. Here the corresponding distance per pixel is calculated at standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
Although these measurements were made using the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
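Put together, the initialization might look like the following sketch. The AT-command format and the navdata wake-up packet follow the AR.Drone SDK, while the function names and the Python/socket setup (the project used MATLAB UDP objects) are illustrative:

```python
import socket

DRONE_IP = "192.168.1.1"   # remote host from the initialization above

def at_command(name, seq, *args):
    """Format an AR.Drone AT command, e.g. at_command('FTRIM', 1) -> 'AT*FTRIM=1\r'."""
    fields = [str(seq)] + [str(a) for a in args]
    return "AT*{}={}\r".format(name, ",".join(fields))

def initialize(ctrl_port=5556, nav_port=5554):
    """Open the control and navdata UDP sockets, wake the navdata stream,
    and set the horizontal-plane reference with FTRIM."""
    ctrl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nav.bind(("", nav_port))
    nav.settimeout(0.001)                          # 1 ms, as configured above
    # any packet sent to port 5554 starts the navdata stream
    nav.sendto(b"\x01\x00\x00\x00", (DRONE_IP, nav_port))
    # set the reference of the horizontal plane
    ctrl.sendto(at_command("FTRIM", 1).encode(), (DRONE_IP, ctrl_port))
    return ctrl, nav
```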
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
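The sending side of such a wrapper can be sketched as below. The AR.Drone protocol transmits floating-point arguments as the signed 32-bit integer sharing the same bit pattern; the argument order of PCMD used here is an assumption and should be checked against the SDK:

```python
import struct

def float_to_at_int(f):
    """Reinterpret a 32-bit float's bit pattern as a signed 32-bit integer,
    as required by the AR.Drone AT-command protocol."""
    return struct.unpack("<i", struct.pack("<f", f))[0]

def pcmd(seq, x_tilt, y_tilt, vz, yaw_rate):
    """Wrap the four command doubles in [-1, 1] into a PCMD AT command
    string (flag 1 = progressive commands enabled); argument order is
    an assumption for illustration."""
    args = [float_to_at_int(v) for v in (x_tilt, y_tilt, vz, yaw_rate)]
    return "AT*PCMD={},1,{},{},{},{}\r".format(seq, *args)
```

For example, a tilt value of -0.8 is transmitted as the integer -1085485875, the bit pattern of -0.8 as a 32-bit float.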
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
As a result of this search, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a low-weight solution that sends images directly over a Wi-Fi connection. To connect to the camera, one needs a Wi-Fi antenna. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera are removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); its definition is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is necessary to know the real-world size of the image frame and the corresponding real-world dimension per pixel. This information is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
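From the diagonal FOV and the resolution, the real-world size of one pixel at a given flying height follows directly. A small sketch, assuming the camera looks straight down:

```python
import math

def metres_per_pixel(height, fov_diag_deg=60.0, res=(640, 480)):
    """Real-world size of one pixel for a downward-looking camera at
    `height` metres, from the diagonal FOV and the image resolution."""
    diag_px = math.hypot(*res)                       # 800 px for 640x480
    # real-world length of the image diagonal on the ground
    diag_m = 2.0 * height * math.tan(math.radians(fov_diag_deg) / 2.0)
    return diag_m / diag_px
```

At a flying height of 1 m this gives roughly 1.4 mm per pixel for the Ai-Ball values above.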
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further extension, as parts of the extensive code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: they can move and take images. Based on the game situation and the agents' positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd,yd,θd), which represent the drone position and yaw angle in the global coordinate system. These values, as outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, the planar motion of the drone in (x,y) is of interest, as the ball and objects on the pitch move in 2-D space. Consequently, the desired trajectories of the drone are trajectories like straight lines, while aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x,y,θ) measured from the top camera images are compared to the reference values. The high level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the Low Level Controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria. The LLC is already implemented in the drone. Using identification techniques, the speed controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating a rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig.3). The input-output diagram of the controller for each state is composed of 3 regions. In the dead zone region, if the error in one direction is less than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the motion equation of the drone, an I action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead zone region are not offset by the dead zone. This approach prevents sending small commands in the oscillation region to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
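The dead-zone PD action described above can be sketched as follows; the gain and threshold values in the example are illustrative, not the tuned project parameters:

```python
def dead_zone_pd(error, d_error, kp, kd, dead_zone):
    """PD control action with a dead zone: inside the comfort zone the
    command is zero, so no small oscillating commands reach the drone.
    Outside it, the error is used directly (not offset by the dead zone),
    as described above."""
    if abs(error) < dead_zone:
        return 0.0
    return kp * error + kd * d_error
```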
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotations (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order of rotation around the specific axes is important. In the field of automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by means of a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi. Furthermore, it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information can be extracted from the game, and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field, can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player-robots from TechUnited communicate with each other via the UDP protocol; this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is information on the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the code-base of TechUnited. This piece consisted of functions which extracted the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and sent it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
<hr />
<p>Implementation MSD16 (2017-10-22)</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we use it as is without altering it. Preferably, we would also use this software to process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation's [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line detection algorithm is updated and used in this project. The detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is separating the line detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball detection algorithm is shown in the next figure. First, the camera images are filtered on color. The balls that can be used can be red, orange or yellow; colors that are in the upper-left corner of the CbCr-plane. A binary image is created where the pixels which fall into this corner get a value of 1 and the rest get a value of 0. Next, to do some noise filtering, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels. Remaining holes inside the obtained blobs are filled. From the obtained image a blob recognition algorithm returns blobs with their properties, such as the blob center and major- and minor axis length. From this list with blobs and their properties, it is determined if it could be a ball. Blobs that are too big or too small are removed from the list. For the remaining possible balls in the list, a confidence is calculated. This confidence is based on the blob size and roundness: <br />
<br />
confidence = (minor Axis / major Axis) * (min(Rblob,Rball) / max(Rblob,Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
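The confidence formula above can be sketched as follows (illustrative Python; the MATLAB implementation may differ in details):<br />

```python
def ball_confidence(minor_axis, major_axis, r_blob, r_ball):
    # Roundness: 1.0 for a perfectly circular blob, smaller when elongated.
    roundness = minor_axis / major_axis
    # Size match: 1.0 when the blob radius equals the expected ball radius.
    size_match = min(r_blob, r_ball) / max(r_blob, r_ball)
    return roundness * size_match
```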
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way as the ball detection. However, instead of doing the color filtering on the CbCr plane, it is done on the Y-axis only. Since the players are coated with a black fabric, their Y-value will be lower than the surroundings. Moreover, the range of detected blobs which could be players is larger for the object detection than it was for the ball detection. This is done because the players are not perfectly round like the ball is: seen from the top, a player appears different than when seen from an angle. A bigger range of accepted blobs ensures a lower chance of false negatives. The confidence is calculated in the same fashion as in the ball detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line detection case, the algorithm developed by the previous generation is essentially reused. The detailed explanation of this ball-out-of-pitch detection algorithm is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the ball condition, it is not able to handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter. Even when the ball is not detected by the camera, its position with respect to the field coordinate system can still be known (or at least predicted), and based on this coordinate information the in/out decision can be further improved. This was added to the ball-out-of-pitch refereeing skill function. However, it sometimes yields false positive and false negative results; further improvement of the refereeing is still necessary.<br />
<br />
=== Collision detection ===<br />
For the collision-detection we can rely on two sources of information: the world model and the raw images. If we can keep track of the position and velocity of two or more players in the world model, we might be able to predict that they are colliding. Moreover, when we take the images of the playing field and we see in the image that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both these methods would be ideal. However, since the collision detection in the world model was not implemented, we will only discuss the image-based collision detection. This detection makes use of the list of blobs that is generated in the object detection algorithm. For each blob in this list, the length of the minor- and major axes are checked. The axes are compared to each other to determine the roundness of the object. Moreover, the axes are compared with the minimal expected radius of the player. If the following condition holds, a possible collision is detected:<br />
<br />
if ((major_axis / minor_axis > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
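As an illustration, this condition can be wrapped in a small predicate (a Python sketch; the thresholds follow the condition above):<br />

```python
def possible_collision(major_axis, minor_axis, minimal_object_radius):
    # A single elongated blob roughly the size of two players suggests that
    # two players are touching and have merged into one blob in the image.
    return ((major_axis / minor_axis > 1.5)
            and (minor_axis >= 2 * minimal_object_radius)
            and (major_axis >= 4 * minimal_object_radius))
```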
<br />
== Positioning skills ==<br />
Position data of each component can be obtained in several ways. In this project, the planar position (x-y-ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x,y,z) and the corresponding angular positions roll, pitch and yaw (φ,θ,ψ). <br />
Although the roll (φ) and pitch (θ) angles of the drone are important for the control of the drone itself, they are not important for the refereeing tasks, because all the refereeing and image processing algorithms are developed under one essential assumption: the drone's angular positions are stabilized such that the roll (φ) and pitch (θ) values are zero. Therefore, these two angles are not taken into account.<br />
<br />
The top camera yields the planar position of the drone with respect to the field reference coordinate frame, which includes the x, y and yaw (ψ) information. However, to be able to handle the refereeing tasks and the image processing, the drone altitude should also be known. The drone has its own altimeter, and its output data is accessible. The obtained altitude is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating an optimal path for the agents, which is sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, it sends 'detect ball' as a task to agent A (the drone) and 'locate player' to agent B. The path planning block then requests from the World Model the latest information about the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path planning block generates a reference point for the agent's controller. As shown in Fig.1, it is assumed that the World Model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent's camera. In the latter case, the particle filter gives an estimate of the ball's position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about any object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two aspects addressed in the path planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as the reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object so as to meet it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal look-ahead time t0 to use for the desired reference. To solve it, we require a model of the drone motion, including the controller, to calculate the time it takes to reach a certain point given the drone's initial condition. Then, in the search algorithm, for each time step ahead of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply extrapolated from the look-ahead time. The reference position is then the position that satisfies t0 = TT. Hence, the reference position becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move only in one direction, the same strategy can be applied: the reference value should be determined only in the moving direction of the Turtle, so only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
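The search described above can be sketched as follows (an illustrative Python version; `time_to_target` stands in for the assumed drone-plus-controller model, and the step size and horizon are placeholder values):<br />

```python
def reference_point(ball_pos, ball_vel, time_to_target, dt=0.05, t_max=5.0):
    # Search for the smallest look-ahead time t0 such that the drone's
    # time-to-target for the extrapolated ball position is at most t0,
    # then return that predicted position as the controller reference.
    t0 = 0.0
    while t0 < t_max:
        target = (ball_pos[0] + ball_vel[0] * t0,
                  ball_pos[1] + ball_vel[1] * t0)
        if time_to_target(*target) <= t0:
            return target
        t0 += dt
    # Fall back to the horizon if no intersection is found in time.
    return (ball_pos[0] + ball_vel[0] * t_max,
            ball_pos[1] + ball_vel[1] * t_max)
```

For a drone that reaches the ball instantaneously, the search returns the current ball position, as expected for small drone-ball distances.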
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planning should create paths for the agents in a way that avoids collisions between them. This is done in the collision avoidance block, which has a higher priority than the optimal path planning calculated from the drones' objectives (see Fig.4). The collision avoidance block is triggered when the drone states meet certain criteria indicating an imminent collision. The supervisory control then switches to collision avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance. The command, a velocity perpendicular to the velocity vector of each drone, is sent to the LLC and is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be an area of interest for those continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
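To illustrate the storage concept, a minimal sketch of such a class is given below (in Python for readability; the actual World Model is a MATLAB class, and the method names here are illustrative, not the ones in Tables 1 and 2):<br />

```python
class WorldModel:
    """Sketch of the World Model as a storage unit. Values can only be
    changed through explicit 'set' functions, preventing processes from
    accidentally overwriting world-model data."""

    def __init__(self, n_players):
        # n_players is the number of players per team; one ball is hardcoded.
        self._ball = None
        self._players = [None] * (2 * n_players)

    def set_ball(self, x, y):
        self._ball = (x, y)

    def get_ball(self):
        return self._ball

    def set_player(self, idx, x, y):
        self._players[idx] = (x, y)

    def get_player(self, idx):
        return self._players[idx]
```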
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
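The strong/weak hypothesis logic can be sketched as follows (an illustrative Python simplification; the thresholds correspond to the 0.5 m distance and the two consecutive outlier measurements mentioned above):<br />

```python
import math

def update_hypotheses(strong, z, outliers, dist_thresh=0.5, n_thresh=2):
    # Weak hypothesis: trust the raw measurement z.
    weak = z
    if math.dist(strong, z) > dist_thresh:
        outliers += 1
        if outliers >= n_thresh:
            # Enough consecutive outliers: treat it as a real change of
            # direction and re-initialize the strong hypothesis.
            strong, outliers = weak, 0
    else:
        outliers = 0
    return strong, weak, outliers
```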
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the Turtle. These measurements are used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, a particle filter is again used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
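A simplified sketch of the greedy nearest-neighbour matching (illustrative Python, not the implemented MATLAB 'Match' function):<br />

```python
import math

def match_measurements(measurements, known_positions):
    # Greedy nearest-neighbour matching: each measurement claims the closest
    # player that has not been claimed yet, so two measurements are never
    # matched to the same player (the second one falls back to its second
    # nearest neighbour).
    unclaimed = dict(enumerate(known_positions))
    matches = {}
    for m_idx, m in enumerate(measurements):
        p_idx = min(unclaimed, key=lambda i: math.dist(m, unclaimed[i]))
        matches[m_idx] = p_idx
        del unclaimed[p_idx]
    return matches
```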
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control of the drone can be robust. As the flying height of the drone is not demanding for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) provides a visual impression of the original data measured by the top camera. Based on fig.2, the data clearly indicates what the motion of the drone looks like in one degree of freedom. To make it continuous, interpolation can be implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation estimates reasonable guess for empty data points. <br />
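The interpolation step can be sketched as follows (illustrative Python; the actual preprocessing is done in MATLAB, and missing camera frames are represented here as None):<br />

```python
def fill_gaps(samples):
    # Indices of frames where the top camera did detect the drone.
    known = [i for i, v in enumerate(samples) if v is not None]
    out = list(samples)
    for i, v in enumerate(out):
        if v is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None or right is None:
            continue  # gaps at the boundaries cannot be interpolated
        # Linear interpolation between the nearest known samples.
        t = (i - left) / (right - left)
        out[i] = out[left] + t * (out[right] - out[left])
    return out
```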
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: one in the body frame, the other in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are calculated in the global coordinate system. <br><br><br />
The data can be transformed between body frame and global frame via the rotation matrix. To simplify the identification process, the rotation matrix is built outside the Kalman filter. The identified model is the response to the input commands (a, b, c and d) in the body frame; the filtered data is then transferred back to the global frame as feedback. The basic concept is to filter the data in the body frame, to avoid making the Kalman filter parameter-varying. Figure 5 describes this concept in a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are, in theory, decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown in the following, and will be used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model with the data shown above. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions need to be made to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response of this state-space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model is evaluated in MATLAB by comparing it with the measured response; the result indicates how well the model fits the data. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
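The structure of such a second-order model, with states X = [ẋ, x] (velocity and position), can be illustrated with a simple simulation. The matrices below are placeholders for illustration only, not the actual identified parameters:

```python
import numpy as np

# Placeholder continuous-time model with states X = [xdot, x]:
#   xddot = -(1/tau) * xdot + (K/tau) * b   (first-order lag on velocity)
#   dx/dt = xdot
# tau and K are illustrative values, NOT the identified parameters.
tau, K = 0.5, 2.0
A = np.array([[-1.0 / tau, 0.0],
              [1.0,        0.0]])
B = np.array([[K / tau],
              [0.0]])

def simulate_step(A, B, u, dt=0.01, t_end=5.0):
    """Forward-Euler simulation of X' = A X + B u for a constant input u."""
    X = np.zeros((2, 1))
    for _ in range(int(t_end / dt)):
        X = X + dt * (A @ X + B * u)
    return X  # [velocity, position] at t_end

X_end = simulate_step(A, B, u=0.2)
```

For a constant input, the velocity state settles at K·u while the position keeps integrating, which is the qualitative behavior seen in the measured step responses.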
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, in the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR.Drone was measured and modeled with a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatched part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. The repeatability of the drone, however, is a critical issue and has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector <math>X = [\dot{y} \;\; y]^T</math>, i.e. velocity and position.<br><br> <br />
The model is then:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The relevant properties of the drone, as given on the manufacturer’s website, are listed in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, it was decided to use the drone’s own structure, control electronics and software for positioning the drone; controlling a drone from scratch is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is mounted at the front of the drone, but for refereeing it should look downward. The first idea was therefore to disassemble it and mount the camera on a swivel so it could tilt down 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after some trial and error it was observed that capturing and transferring images from the embedded drone camera to MATLAB is not straightforward; further effort showed that using this camera is either incompatible with MATLAB or introduces a lot of delay. The idea of a swiveled drone camera was therefore abandoned and a new camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly from MATLAB is not possible with the built-in software. An indirect route is therefore required, which costs processing time: the best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, this resolution is used for processing to keep the processing time down.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field-of-view (FOV) angle; its definition is shown in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a horizontal FOV of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
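The relation between a diagonal FOV, the aspect ratio, and the ground distance per pixel can be computed as in the following sketch (Python; it assumes an ideal rectilinear lens and a camera looking straight down, so real measurements, as noted above, can deviate from the specified FOV):

```python
import math

def fov_components(diag_fov_deg, width_px, height_px):
    """Split a diagonal FOV into horizontal/vertical FOV, assuming a rectilinear lens."""
    diag_px = math.hypot(width_px, height_px)
    # Half-diagonal on the image plane at unit focal length:
    half_diag = math.tan(math.radians(diag_fov_deg) / 2.0)
    h_fov = 2.0 * math.degrees(math.atan(half_diag * width_px / diag_px))
    v_fov = 2.0 * math.degrees(math.atan(half_diag * height_px / diag_px))
    return h_fov, v_fov

def metres_per_pixel(h_fov_deg, width_px, altitude_m):
    """Ground distance covered by one pixel for a camera looking straight down."""
    ground_width = 2.0 * altitude_m * math.tan(math.radians(h_fov_deg) / 2.0)
    return ground_width / width_px
```

For a 92° diagonal FOV at 640x360 this gives a horizontal FOV in the mid-80s of degrees, noticeably more than the roughly 70° that was actually measured, which illustrates why measuring the FOV rather than trusting the specification was worthwhile.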
<br />
Although these measurements were made with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP interface was selected. This camera, called the Ai-Ball, is described in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
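For illustration, the same initialization can be sketched with plain UDP sockets (Python here, rather than MATLAB's UDP objects; the constants mirror the values listed above):

```python
import socket

DRONE_IP = "192.168.1.1"   # remote host
CONTROL_PORT = 5556        # AT commands are sent to this port
NAVDATA_PORT = 5554        # navdata is received on this port (little-endian payload)
NAVDATA_BUFFER = 500       # input buffer size in bytes

def open_drone_sockets(timeout_s=0.001):
    """Open the two UDP sockets used to talk to the drone.
    The 1 ms timeout matches the navdata timeout listed above."""
    control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata.bind(("", NAVDATA_PORT))
    navdata.settimeout(timeout_s)
    return control, navdata
```

A navdata read would then be `navdata.recv(NAVDATA_BUFFER)`, decoded as little-endian, while commands go out via `control.sendto(..., (DRONE_IP, CONTROL_PORT))`.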
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block that expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. More precisely, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
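A sketch of the input side of such a wrapper is given below. Encoding floats as the integer with the same 32-bit IEEE-754 bit pattern follows the AR.Drone SDK convention; the exact argument order of the AT*PCMD command should be verified against the SDK documentation:

```python
import struct

def float_to_int_arg(f):
    """AR.Drone AT commands encode float arguments as the signed integer
    sharing the same 32-bit IEEE-754 bit pattern."""
    return struct.unpack('<i', struct.pack('<f', f))[0]

def make_pcmd(seq, tilt_x, tilt_y, v_z, v_psi):
    """Build an AT*PCMD progressive-command string from four normalized
    values in [-1, 1]. The argument order assumed here is
    (roll = left tilt, pitch = front tilt, vertical speed, yaw speed)."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    args = [float_to_int_arg(clamp(v)) for v in (tilt_y, tilt_x, v_z, v_psi)]
    return "AT*PCMD={},1,{},{},{},{}\r".format(seq, *args)
```

The returned string is what gets packed into the UDP packet sent to the control port; the output side of the wrapper would similarly decode the 500-byte navdata packet into the doubles listed above.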
<br />
== Top-Camera ==<br />
The topcam is a camera fixed above the playing field. It is used to estimate the location and orientation of the drone, and this estimate is used as feedback for the drone to position itself at a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem: the positioning of the drone is neither perfect nor critical, and as long as the target of interest (ball, players) stays within the drone’s field of view, it is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After investigating the alternatives, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a Wi-Fi connection; to connect to the camera, a Wi-Fi antenna is needed. The camera is mounted at the front of the drone, facing down. To reduce the weight of the added system, the camera’s batteries were removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checker-board shape. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV); the definition of the FOV is shown above. The Ai-Ball has a 480p resolution with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is given as 60°. This information is needed to determine the real-world size of the image frame and the corresponding real-world distance per pixel, and is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details of the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project the Turtle is used as a referee. The software developed at TechUnited did not need any expansion, as part of the extensive code base could be reused to fulfill the referee role. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Drone Motion Control ==<br />
The design of an appropriate tracking control algorithm is a crucial element of the drone referee project. The agents in this project have two main capabilities: moving and taking images. Given the game situation and the agents’ positions, the desired position of each agent is calculated from the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block, which is not covered here. The goal of the motion control block is to track the desired drone states (x<sub>d</sub>, y<sub>d</sub>, θ<sub>d</sub>), which represent the drone position and yaw angle in the global coordinate system. These values, the outputs of the path-planning block, are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image-processing block. In this project only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D. Consequently, the desired drone trajectories are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption in this approach is that the quadrotor performs only small angular maneuvers.<br />
As shown in Fig.2, the drone states (x, y, θ) measured from the top-camera images are compared with the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as an input to the speed controller of the drone, the low-level controller (LLC). In this project, the HLC was designed and its parameters tuned to meet a specific tracking criterion; the LLC is already implemented in the drone. Using identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Furthermore, through the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Motion Control Diagram]]<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values for the LLC based on the state errors to be controlled (Fig.3). The input-output diagram of the controller for each state consists of 3 regions. In the dead-zone region, if the error in one direction is less than a predefined value, the output of the controller is zero; this results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the motion equation of the drone, an I action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width; this prevents sending small commands, which would lie in the oscillation region, to the drone.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that, these errors are calculated with respect to the global coordinate system. Hence, the control command first must be transformed in to drone coordinate system with rotational matrix that uses Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
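The dead-zone PD logic described above can be sketched as a simple function (Python; the gains and band width below are hypothetical values, not the tuned parameters):

```python
def deadzone_pd(error, d_error, kp, kd, dead_band):
    """PD controller with a dead zone: no command while the error is inside
    the band; outside it, the full (un-offset) PD command is sent, as
    described above, to avoid exciting the LLC's oscillation region."""
    if abs(error) < dead_band:
        return 0.0
    return kp * error + kd * d_error
```

One such controller runs per state (x, y, θ), each with its own gains and dead-band width.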
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (Ф, φ, θ) about the sequentially displaced axes of a reference frame, generally referred to as Euler angles. Within this method, the order of the rotations about the specific axes matters. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is kept constant and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
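The reduction of the full RPY rotation to a yaw-only rotation can be illustrated as follows (Python sketch; the Z-Y-X rotation convention is assumed):

```python
import numpy as np

def rpy_matrix(roll, pitch, yaw):
    """Body-to-inertial rotation from roll-pitch-yaw Euler angles (Z-Y-X convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def yaw_only(yaw):
    """Reduced rotation used in this project: roll = pitch = 0."""
    return rpy_matrix(0.0, 0.0, yaw)
```

For small roll and pitch angles the full matrix is close to the yaw-only matrix, which justifies the simplification used here.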
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) that drive three omni-wheels independently. To its left, a copy with a cover is shown. This cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot from a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image-processing algorithms, useful information can be extracted from these images and a mapping of the game state can be computed, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. Details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally; therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The TechUnited player-robots communicate with each other via the UDP protocol, executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken out of the TechUnited code base. It consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in the MATLAB environment, and sending it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink as depicted above. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>

Implementation MSD16 (2017-10-22, Tolcer: /* Positioning of the Agents : Drone */)
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Pitch (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are; to detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras, detecting balls, lines and players requires image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as-is and do not alter it. Preferably we would also use this software to process the images from the drone; however, understanding years’ worth of code in order to adapt it to the drone camera (Ai-Ball) would take much more time than developing our own code. For this project, we therefore decided to use MATLAB’s Image Processing Toolbox to process the drone images. The images coming from the Ai-Ball are in the RGB color space; for detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
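The RGB-to-YCbCr conversion can be sketched as follows (Python, using the ITU-R BT.601 coefficients for 8-bit images, which is also the convention MATLAB's rgb2ycbcr follows):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert 8-bit RGB values to YCbCr (ITU-R BT.601, 8-bit full range inputs).
    rgb: array-like of shape (..., 3) with values in 0..255."""
    rgb = np.asarray(rgb, dtype=np.float64)
    m = np.array([[ 65.481, 128.553,  24.966],
                  [-37.797, -74.203, 112.0  ],
                  [112.0,   -93.786, -18.214]]) / 255.0
    offset = np.array([16.0, 128.0, 128.0])
    return rgb @ m.T + offset
```

In this space, luminance (Y) and chrominance (Cb, Cr) are separated, so the yellow/orange ball colors cluster in one corner of the CbCr plane regardless of lighting intensity, which is what makes the color thresholding below practical.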
<br />
=== Line Detection ===<br />
The line detection is achieved using [http://nl.mathworks.com/help/images/hough-transform.html Hough transform] technique. <br />
Since the project is a continuation of the MSD 2015 generation’s [http://cstwiki.wtb.tue.nl/index.php?title=Robotic_Drone_Referee Robotic Drone Referee Project], the previously developed line-detection algorithm is updated and reused. A detailed explanation of this algorithm can be found [http://cstwiki.wtb.tue.nl/index.php?title=Line_Detection here.]<br />
Some updates have been applied to this code, but the algorithm itself is unchanged. The essential update is separating the line-detection code from the combined detection code created by the previous generation, turning it into an isolated skill.<br />
<br />
=== Detect balls ===<br />
The flow of the ball-detection algorithm is shown in the next figure. First, the camera images are filtered on color: usable balls are red, orange or yellow, colors that lie in the upper-left corner of the CbCr plane. A binary image is created in which pixels that fall into this corner get a value of 1 and the rest get 0. Next, to filter out noise, a dilation operation is performed on the binary image with a circular element with a radius of 10 pixels, and remaining holes inside the obtained blobs are filled. A blob-recognition algorithm then returns the blobs with their properties, such as the blob center and the major- and minor-axis lengths. From this list of blobs and properties it is determined whether each blob could be a ball: blobs that are too big or too small are removed from the list, and for each remaining candidate a confidence is calculated based on blob size and roundness: <br />
<br />
 confidence = (minorAxis / majorAxis) * (min(Rblob, Rball) / max(Rblob, Rball))<br />
<center>[[File:ImageProc_ball.png|thumb|center|1250px|Flow of ball detection algorithm]]</center><br />
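The confidence formula can be written as a small function (Python sketch; the argument names are illustrative):

```python
def ball_confidence(minor_axis, major_axis, blob_radius, expected_ball_radius):
    """Confidence that a blob is the ball: the roundness factor and the size
    factor are both in [0, 1], so a perfectly round blob of exactly the
    expected radius scores 1.0."""
    roundness = minor_axis / major_axis
    size = min(blob_radius, expected_ball_radius) / max(blob_radius, expected_ball_radius)
    return roundness * size
```

Blobs whose confidence falls below some threshold can then be discarded before the result is passed on to the world model.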
<br />
=== Detect objects ===<br />
The object (or player) detection works in a similar way to the ball detection. However, instead of filtering color in the CbCr plane, the filtering is done on the Y axis only: since the players are covered with a black fabric, their Y value is lower than that of the surroundings. Moreover, the range of blob sizes accepted as possible players is larger than for the ball detection. This is because the players are not perfectly round like the ball: seen from the top they appear different than when seen from an angle, and a wider acceptance range lowers the chance of false negatives. The confidence is calculated in the same fashion as in the ball-detection algorithm.<br />
<br />
<center>[[File:ImageProc_objects.png|thumb|center|1250px|Flow of object detection algorithm]]</center><br />
<br />
== Refereeing ==<br />
<br />
=== Ball Out of the Pitch Detection ===<br />
Following the detection of the ball and the boundary lines, the ball-out-of-pitch detection algorithm is called. As in the line-detection case, the algorithm developed by the previous generation is essentially reused; a detailed explanation is given [http://cstwiki.wtb.tue.nl/index.php?title=Refereeing_Out_of_Pitch here].<br />
Although this algorithm provides some information about the state of the ball, it cannot handle all use cases. Therefore, an update was added to handle the cases where the ball position is predicted via the particle filter: even when the ball is not detected by the camera, its position with respect to the field coordinate system is known (or at least predicted), and the in/out decision can be improved based on this coordinate information. This part was added to the ball-out-of-pitch refereeing skill function. However, it still sometimes yields false-positive and false-negative results, so further improvement of the refereeing is necessary.<br />
<br />
=== Collision detection ===<br />
For the collision detection we can rely on two sources of information: the world model and the raw images. If we keep track of the positions and velocities of two or more players in the world model, we may be able to predict that they are colliding. Moreover, when we take an image of the playing field and see that there is no space between the blobs (players), we can assume that they are standing against each other. A combination of both methods would be ideal; however, since collision detection in the world model was not implemented, we only discuss the image-based collision detection here. It makes use of the list of blobs generated by the object-detection algorithm: for each blob in this list, the lengths of the minor and major axes are checked. The axes are compared with each other to determine the roundness of the object, and with the minimal expected player radius. If the following condition holds, a possible collision is detected:<br />
<br />
 if (((major_axis / minor_axis) > 1.5) & (minor_axis >= 2 * minimal_object_radius) & (major_axis >= 4 * minimal_object_radius))<br />
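The same condition as a small, testable function (Python sketch of the check above):

```python
def possible_collision(minor_axis, major_axis, min_object_radius):
    """Flag a single elongated blob as two players in contact: it must be
    clearly non-round yet large enough in both axes to contain two players."""
    return ((major_axis / minor_axis) > 1.5
            and minor_axis >= 2 * min_object_radius
            and major_axis >= 4 * min_object_radius)
```

A round blob of roughly single-player size fails the elongation test, while two touching players merge into one blob whose major axis is at least four player radii long.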
<br />
== Positioning skills ==<br />
Position data for each component can be obtained in several ways. In this project, the planar position (x, y, ψ) of the refereeing agent (only the drone) is obtained using an ultra-bright LED strip that is detected by the top camera. The ball position is obtained using image processing and further post-processing of the image data. <br />
<br />
=== Locating of the Agents : Drone ===<br />
The drone has 6 degrees of freedom (DOF): the linear coordinates (x, y, z) and the corresponding angular positions roll, pitch and yaw (φ, θ, ψ). <br />
Although the roll (φ) and pitch (θ) angles are important for the control of the drone itself, they are not important for the refereeing tasks, because all refereeing and image-processing algorithms are developed under an essential assumption: the drone’s angular positions are well stabilized, such that the roll (φ) and pitch (θ) values are zero. These two angles are therefore not taken into account. <br />
<br />
The top camera yields the planar position of the drone with respect to the field reference frame, i.e. the x, y and yaw (ψ) information. However, to handle the refereeing tasks and image processing, the drone altitude must also be known. The drone has its own altimeter, whose output data is accessible. The obtained altitude is fused with the planar position data.<br />
<br />
Then the agent position vector can be obtained as: <br />
<br />
<math display="center">\begin{bmatrix} x \\ y \\ \psi \\ z \end{bmatrix}</math><br />
<br />
=== Locating of the Objects : Ball & Player ===<br />
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents, sent as a desired position to their controllers. In the system architecture, the coordinator block decides which skill is to be performed by each agent; for instance, it sends ''detect ball'' as a task to agent A (the drone) and ''locate player'' to agent B. The path-planning block then requests from the world model the latest position and velocity of the target object, as well as the positions and velocities of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the world model can provide the position and velocity of an object such as the ball, whether or not it has recently been observed by an agent camera; in the latter case, the particle filter estimates the ball position and velocity from the dynamics of the ball. Therefore, estimated information about the object assigned by the coordinator is assumed to be available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two factors are addressed in the path-planning block: first, avoiding collisions between drones in the case of multiple drones; second, generating an optimal path as the reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object on the field, after which the world model provides the path planner with the latest position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a good choice when the agent and the object are relatively close to each other; however, the velocity vector of the object can be exploited in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the drone is far from the ball, the drone should track a position ahead of the object so that it meets it at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). If instead the estimated ball position some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is finding the optimal time ahead t0 that should be set as the desired reference. To solve it, we require a model of the drone motion with its controller to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, the time to target (TT) of the drone is then calculated for each candidate time step ahead of the ball (see Fig.3), where the target position is simply the predicted ball position at that time ahead. The reference position is the position that satisfies t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. The same strategy can be applied to the ground agents, which move in only one direction: for the ground robot, the reference value should be determined only along the turtle's direction of motion, so only the x-component (the turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
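The search for the look-ahead time can be sketched as follows. This is a minimal Python sketch: the constant-cruise-speed drone model, the function names and the parameter values are assumptions, since in the project the real time to target comes from the identified drone model with its controller.

```python
import numpy as np

def time_to_target(drone_pos, target_pos, v_max=1.5):
    """Crude drone model: time to reach a point at a constant cruise speed.
    In the project this would come from the identified drone dynamics."""
    return np.linalg.norm(np.asarray(target_pos, float) - np.asarray(drone_pos, float)) / v_max

def reference_ahead(drone_pos, ball_pos, ball_vel, dt=0.05, t_max=5.0):
    """Search over candidate look-ahead times t0 until t0 >= TT."""
    ball_pos = np.asarray(ball_pos, dtype=float)
    ball_vel = np.asarray(ball_vel, dtype=float)
    for t0 in np.arange(0.0, t_max, dt):
        target = ball_pos + ball_vel * t0        # predicted ball position at t + t0
        if time_to_target(drone_pos, target) <= t0:
            return target                         # first t0 satisfying t0 = TT (discretized)
    return ball_pos + ball_vel * t_max            # fall back to the farthest prediction
```

For a stationary ball the search simply returns the current ball position as soon as t0 exceeds the travel time, which matches the remark that the scheme adds little when drone and object are close.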
<br />
=== Collision avoidance ===<br />
When drones are flying above the field, the path planner should create paths for the agents that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning computed from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision-avoidance mode to repel the drones from getting closer. This is accomplished by sending a relatively strong command to the drones in a direction that maintains a safe distance: a velocity command perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and it is stopped once the drones are at safe positions. Since this project deals with only one drone, collision avoidance was not implemented. However, it could be a possible area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
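Although collision avoidance was not implemented in this project, the perpendicular repel command described above could be sketched as follows (Python; the function name, parameters and command magnitude are hypothetical and would have to be tuned):

```python
import numpy as np

def repel_command(v, other_pos, own_pos, speed=1.0):
    """Velocity command perpendicular to the drone's own velocity vector,
    chosen on the side that points away from the other drone."""
    v = np.asarray(v, dtype=float)
    perp = np.array([-v[1], v[0]])               # 90-degree rotation of v
    if np.linalg.norm(perp) < 1e-9:              # hovering: any direction separates
        perp = np.array([1.0, 0.0])
    away = np.asarray(own_pos, float) - np.asarray(other_pos, float)
    if np.dot(perp, away) < 0:                   # flip to the separating side
        perp = -perp
    return speed * perp / np.linalg.norm(perp)
```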
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
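As an illustration, a minimal Python sketch of such a storage class is given below. The exact set-function names are listed in Table 1 (an image), so the names used here are placeholders; the point is that WM data can only be changed through explicit 'set' functions, which prevents accidental overwriting:

```python
class Player:
    """Tracked player; its position may only change through set_position."""
    def __init__(self):
        self._position = (0.0, 0.0)
    def set_position(self, x, y):
        self._position = (x, y)
    @property
    def position(self):
        return self._position

class WorldModel:
    """Central storage, initialized as W = WorldModel(n) with n players per team."""
    def __init__(self, n):
        self._ball = (0.0, 0.0)
        self._drone = (0.0, 0.0)
        self._turtle = (0.0, 0.0)
        self.players = [Player() for _ in range(2 * n)]  # both teams
    def set_ball(self, x, y):
        self._ball = (x, y)
    @property
    def ball(self):
        return self._ball
```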
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
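The outlier logic described above (two consecutive measurements more than 0.5 m from the estimate trigger a re-initialization of the strong filter) can be sketched as follows (Python; class and attribute names are assumptions):

```python
import numpy as np

class HypothesisSwitch:
    """Signal a strong-filter reset after two consecutive outliers (> 0.5 m)."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.outliers = 0
    def check(self, estimate, measurement):
        dist = np.linalg.norm(np.asarray(measurement, float) - np.asarray(estimate, float))
        if dist > self.threshold:
            self.outliers += 1                # possible change of direction
        else:
            self.outliers = 0                 # single outlier was a false positive
        if self.outliers >= 2:                # change of direction confirmed
            self.outliers = 0
            return True                       # caller re-initializes the strong filter
        return False
```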
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are both used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, such as a variance in the case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and distinguishes accurate from inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
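The actual update law is given by the equation above. As an illustration only, a simplified complementary-filter-style update with two tuning parameters in the spirit of α_v and α_x might look as follows (Python; this is an assumed sketch, not the implemented filter):

```python
import numpy as np

def ball_update(x_old, v_old, z_new, z_old, dt, a_v=0.8, a_x=0.2):
    """Hypothetical blend: velocity is a weighted mix of the old particle velocity
    and the measured velocity (a_v makes the filter 'stronger'); position is
    nudged toward the measurement by a_x (making the filter 'weaker')."""
    v_meas = (np.asarray(z_new, float) - np.asarray(z_old, float)) / dt
    v_new = a_v * np.asarray(v_old, float) + (1 - a_v) * v_meas
    x_pred = np.asarray(x_old, float) + v_new * dt      # dead-reckoning prediction
    x_new = (1 - a_x) * x_pred + a_x * np.asarray(z_new, float)
    return x_new, v_new
```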
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case where the sensor(s) detect multiple players. Thus, the system somehow needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
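The ‘Match’ function described above can be sketched as a greedy nearest-neighbour assignment (Python; names are assumptions). When two measurements both prefer the same player, the second one falls back to its next-nearest free player, which is the non-optimal behavior noted above:

```python
import numpy as np

def match(measurements, last_known):
    """Greedy nearest-neighbour matching of measurements to tracked players."""
    assigned = {}
    taken = set()
    for i, z in enumerate(measurements):
        d = [np.linalg.norm(np.asarray(z, float) - np.asarray(p, float))
             for p in last_known]
        for j in np.argsort(d):              # candidates, nearest first
            if int(j) not in taken:          # fall back if already claimed
                assigned[i] = int(j)
                taken.add(int(j))
                break
    return assigned
```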
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. Each command contains the control signals for pitch angle, roll angle, yaw angle and the vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. In addition, there are three LEDs on the drone which can be detected by the camera above the field; based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and minimize the measurement noise, so that the closed-loop control of the drone is robust. Since the flying height of the drone is not critical to the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command a is the forward-back tilt, a floating-point value in the range [-1, 1]. Command b is the left-right tilt, a floating-point value in the range [-1, 1]. Command d is the drone angular speed, in the range [-1, 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, ψ) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are applied as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between the inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example (fig.2) gives a visual impression of the original data measured by the top camera; the motion data clearly indicates the drone motion in one degree of freedom. To make the data continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation provides a reasonable estimate for the empty data points. <br />
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: one is the body frame, the other is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also displayed in the body-frame coordinate system, while the positions measured by the top camera are calculated in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via the rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter: the identified model is the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame so as to avoid a parameter-varying Kalman filter. Figure 5 describes this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
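The body/global transformation used here is a planar rotation by the yaw angle ψ, which can be sketched as follows (Python; the variation in pitch and roll is assumed small, as stated later in the motion-control section):

```python
import numpy as np

def body_to_global(v_body, psi):
    """Rotate a planar body-frame vector into the global frame by yaw angle psi."""
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi),  np.cos(psi)]])
    return R @ np.asarray(v_body, dtype=float)

def global_to_body(v_global, psi):
    """Inverse rotation (the rotation matrix is orthogonal, so R^-1 = R(-psi))."""
    return body_to_global(v_global, -psi)
```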
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below; this processed data is used in the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from these data. In the real world nothing is perfectly linear, due to external disturbances and component uncertainty; hence, some assumptions are needed to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response of this state-space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the measured response is evaluated in MATLAB. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, the AR.Drone has a delay of 4 samples due to the wireless communication. Compared with repeated measurements, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain the mismatching part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the subsequent Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with the state vector X = [ẏ, y], i.e. velocity and position.<br><br> <br />
The model is then:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing. The built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the useful properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, low-level control of a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, placed at the front, which is used to capture images. For refereeing, however, it should look downwards. Therefore, the first idea was to disassemble the camera and mount it on a swivel tilted down by 90 degrees, which would require some structural changes. Since the whole implementation is done in the MATLAB/Simulink environment, the camera images must be accessible from MATLAB. However, after some trial and error, it was observed that capturing and transferring the images of the embedded drone camera to MATLAB is not easy or straightforward: it is either incompatible with MATLAB or causes a lot of delay. The idea of a swiveled drone camera was therefore abandoned, and an alternative camera system was investigated.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at standard 360p resolution (640x360). Although the camera can capture images at higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement of the Drone Camera====<br />
One of the most important properties of a vision system is the field-of-view (FOV) angle, whose definition can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, the achieved measurements showed a horizontal FOV of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2, where the corresponding distance per pixel is calculated at standard resolution (640x360).<br />
<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
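For comparison with the measured values, a pinhole-camera model can predict the horizontal FOV from a specified diagonal FOV and aspect ratio, as well as the distance per pixel at a given height. The sketch below (Python; function names are illustrative) predicts about 84° horizontal for a 92° diagonal at 16:9, while roughly 70° was measured, which illustrates why measuring rather than trusting the spec is worthwhile:

```python
import math

def horizontal_fov(diagonal_fov_deg, width_px, height_px):
    """Pinhole-camera conversion from diagonal FOV to horizontal FOV."""
    diag = math.hypot(width_px, height_px)
    half = math.radians(diagonal_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(width_px / diag * math.tan(half)))

def metres_per_pixel(fov_deg, distance_m, pixels):
    """Real-world size of one pixel at a given distance along one image axis."""
    span = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return span / pixels
```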
<br />
Although these measurements were achieved with the drone camera, it is not used in the final project because of the difficulty of getting its images into MATLAB. Instead, an alternative camera system was investigated. To obtain easy communication and satisfactory image quality, a WiFi camera with a TCP/IP communication interface was selected. This camera, called the Ai-Ball, is explained in the following section.<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <litte-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
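The initialization above can be sketched in Python as follows. The navdata trigger packet and the AT*FTRIM command format follow the AR.Drone SDK; the function name is an assumption, and an actual drone must be on the network for the commands to have any effect.

```python
import socket

DRONE_IP = "192.168.1.1"
AT_PORT = 5556        # control commands
NAVDATA_PORT = 5554   # navigation data stream

def init_drone_sockets(drone_ip=DRONE_IP):
    """Open the UDP sockets, wake the navdata stream, and send FTRIM."""
    control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    navdata.bind(("", NAVDATA_PORT))
    navdata.settimeout(0.001)                                      # 1 ms timeout
    navdata.sendto(b"\x01\x00\x00\x00", (drone_ip, NAVDATA_PORT))  # wake stream
    control.sendto(b"AT*FTRIM=1\r", (drone_ip, AT_PORT))           # horizontal reference
    return control, navdata
```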
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and which returns an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. More precisely, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction, and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
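A sketch of how the four input doubles could be packed into the string the drone expects: the AR.Drone SDK encodes each floating-point argument of the progressive-movement command AT*PCMD as the 32-bit integer with the same bit pattern (Python; the wrapper's parsing of the 500-byte navdata output is omitted here):

```python
import struct

def f2i(x):
    """AR.Drone AT commands encode a float as the int with the same bit pattern."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def fly_command(seq, tilt_x, tilt_y, v_z, v_psi):
    """Build an AT*PCMD progressive command from four values in [-1, 1].
    seq is the running sequence number; the flag 1 enables progressive mode."""
    vals = ",".join(str(f2i(v)) for v in (tilt_x, tilt_y, v_z, v_psi))
    return "AT*PCMD={},1,{}\r".format(seq, vals)
```

For example, -0.8 is sent as -1085485875, the signed-integer view of its IEEE-754 bit pattern.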
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. It is used to estimate the location and orientation of the drone, and this estimate serves as feedback for the drone to position itself at a desired location.<br />
The topcam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the drone's field of view, the performance is acceptable.<br />
<br />
== Ai-Ball : Imaging from the Drone ==<br />
After some searching, we decided to use a Wi-Fi webcam whose details can be found [http://www.thumbdrive.com/aiball/intro.html here].<br />
This is a lightweight solution that sends images directly over a Wi-Fi connection; a Wi-Fi antenna is needed to connect to the camera. The camera is placed facing down at the front of the drone. To reduce the weight of the added system, the batteries of the camera were removed and its power is supplied by the drone via a USB power cable.<br />
<br />
The camera is calibrated using a checkerboard pattern. The calibration data and functions can be found in the Dropbox folder. <br />
<br />
One of the most important properties of a camera is its field of view (FOV), whose definition is shown above. The resolution of the Ai-Ball is 480p with a 4:3 aspect ratio, which yields a 640x480 pixel image.<br />
<br />
The diagonal FOV angle of the camera is specified as 60°. This information is needed to determine the real-world size of the image frame and the corresponding real-world dimension per pixel. It is embedded in the Simulink code that converts measured positions to world coordinates.<br />
[[File:Fovtableaiball.PNG|thumb|centre|500px]]<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was used as a referee. The software developed at TechUnited did not need any further extension, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Drone Motion Control==<br />
The design of appropriate tracking control algorithms is a crucial element in accomplishing the drone referee project. The agents in this project have two main capabilities: moving and taking images. Depending on the game situation and the agents' positions, the desired position of each agent is calculated based on the subtask assigned to it. Generating the reference point is the responsibility of the path-planning block and is not covered here. The goal of the motion control block for the drone is to effectively track the desired drone states (xd, yd, θd), which represent the drone position and yaw angle in the global coordinate system. These outputs of the path-planning block are set as reference values for the motion control block. As shown in Fig.1, the drone states are obtained from a top camera installed on the ceiling and used as feedback in the control system.<br />
The drone height z should also be maintained at a constant level to provide suitable images for the image processing block. In this project, only the planar motion of the drone in (x, y) is of interest, as the ball and the objects on the pitch move in 2-D space. Consequently, the desired drone trajectories are simple ones such as straight lines; aggressive acrobatic maneuvers are not of interest. Hence, linear controllers can be applied for tracking planar trajectories.<br />
<br />
[[File:mc-1.jpeg|thumb|centre|750px||fig.1 System Overview]]<br />
<br />
Most linear control strategies are based on a linearization of the nonlinear quadrotor dynamics around an operating point or trajectory. A key assumption that is made in this approach, is that the quadrotor is subject to small angular maneuvers.<br />
As shown in fig. 2, the drone states (x, y, θ) measured from the top-camera images are compared to the reference values. The high-level controller (HLC) then calculates the desired speed of the drone in global coordinates and sends it as input to the drone's speed controller, the low-level controller (LLC). In this project, the HLC is designed and its parameters are tuned to meet specific tracking criteria; the LLC is already implemented in the drone. Using identification techniques, the speed-controller block was estimated to behave approximately as a first-order filter. Furthermore, by incorporating the rotation matrix, the commands calculated in global coordinates are transformed into drone coordinates and sent as fly commands to the drone.<br />
<br />
[[File:mc-2.jpeg|thumb|centre|750px||fig.2 Drone Moton Control Diagram]]<br />
<br />
<br />
=== High Level & Low Level Controllers ===<br />
<br />
At this level, the controller calculates the reference values of the LLC based on the state errors to be controlled (Fig.3). The input-output diagram of the controller for each state consists of 3 regions. In the dead-zone region, if the error in one direction is smaller than a predefined value, the output of the controller is zero. This results in a comfort zone in which the drone stays without any motion, corresponding to the dead zone of the controller. If the error is larger than that value, the output is determined from the error and the derivative of the error with PD coefficients (Fig.4). Since there is no position-dependent force in the equation of motion of the drone, an I-action is not necessary in the controller. Furthermore, to avoid oscillation in the unstable region of the LLC built into the drone, the errors outside the dead-zone region are not offset by the dead-zone width; this prevents small commands from being sent to the drone in the oscillation region.<br />
<br />
[[File:mc-3.jpeg|thumb|centre|750px||Fig.3 High Level Controller]]<br />
<br />
It should be noted that these errors are calculated with respect to the global coordinate system. Hence, the control command must first be transformed into the drone coordinate system with a rotation matrix based on the Euler angles.<br />
<br />
[[File:mc-4.jpeg|thumb|centre|750px||Fig.4. Controller for position with respect to global coordinate system]] <br />
<br />
[[File:mc-5.jpeg|thumb|centre|750px|| Fig.5. Comfort Zone corresponds to Dead Zone]] <br />
<br />
<br />
=== Coordinate System Transformation ===<br />
<br />
The most commonly used method for representing the attitude of a rigid body is through three successive rotation angles (φ, θ, ψ) about the sequentially displaced axes of a reference frame. These angles are generally referred to as Euler angles. Within this method, the order in which the rotations are applied about the specific axes is important. In automotive and aeronautical research, the transformation from a body frame to an inertial frame is commonly described by a specific set of Euler angles, the so-called roll, pitch, and yaw angles (RPY).<br />
In this project, since the motion in the z-direction is not subject to change and the variations in the pitch and roll angles are small, the rotation matrix reduces to a function of the yaw angle only.<br />
<br />
[[File:mc-6.jpeg|thumb|centre|750px|| Fig.6 Coordinate Systems Transformation]]<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino, and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover prevents the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, the robots must be able to collide more than once without breaking.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them, and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed so that the robot can also be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
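Sending a command string to the robot over UDP can be sketched as below. The host address, port, and "vx vy vphi" message format are assumptions for illustration; the exact protocol is defined in the repository.<br />

```python
import socket

def send_command(vx, vy, vphi, host="127.0.0.1", port=5005):
    """Send an illustrative velocity command string to the robot's
    Raspberry Pi over UDP. Returns the message that was sent.

    NOTE: host/port and the message format are hypothetical; consult
    the project repository for the actual command protocol.
    """
    msg = "{:.2f} {:.2f} {:.2f}".format(vx, vy, vphi)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg.encode("ascii"), (host, port))
    sock.close()
    return msg
```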
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
[[File:SimulinkModel.png|thumb|centre|1100px]]<br />
<br />
<br />
<br />
<br />
<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of the players,<br><br />
and other entities present on the field, can be computed. These locations are given with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored in the memory of the Turtle and updated regularly in a real-time database (RTDB) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information had to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to send data to and receive data from the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the information on the locations of the Turtle, the ball, and the players.<br> <br />
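Unpacking the handpicked location data from a received UDP payload can be sketched as follows. The packet layout (six little-endian 32-bit floats) is purely an assumption for illustration and is not TechUnited's actual RTDB wire format.<br />

```python
import struct

def parse_worldmap_packet(payload):
    """Unpack an illustrative flat packet of six float32 values into the
    three quantities handpicked from the Turtle's WorldMap: the positions
    of the Turtle, the ball, and one player (global coordinates).

    NOTE: the "<6f" layout is a hypothetical example, not the real format.
    """
    x_t, y_t, x_b, y_b, x_p, y_p = struct.unpack("<6f", payload)
    return {"turtle": (x_t, y_t), "ball": (x_b, y_b), "player": (x_p, y_p)}
```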
A small piece of code from the code base of TechUnited was taken out. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment, and send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in Simulink. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Tolcer